WorldWideScience

Sample records for neural networks included

  1. Improved exponential convergence result for generalized neural networks including interval time-varying delayed signals.

    Science.gov (United States)

    Rajchakit, G; Saravanakumar, R; Ahn, Choon Ki; Karimi, Hamid Reza

    2017-02-01

    This article examines the exponential stability analysis problem of generalized neural networks (GNNs) including interval time-varying delayed states. A new improved exponential stability criterion is presented by establishing a proper Lyapunov-Krasovskii functional (LKF) and employing new analysis theory. The improved reciprocally convex combination (RCC) and weighted integral inequality (WII) techniques are utilized to obtain new sufficient conditions to ascertain the exponential stability result of such delayed GNNs. The superiority of the obtained results is clearly demonstrated by numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Neural Networks: Implementations and Applications

    NARCIS (Netherlands)

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  3. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method

    CERN Document Server

    Barkaoui, Abdelwahed; Tarek, Merzouki; Hambli, Ridha; Ali, Mkaddem

    2014-01-01

    The complexity and heterogeneity of bone tissue require a multiscale modelling to understand its mechanical behaviour and its remodelling mechanisms. In this paper, a novel multiscale hierarchical approach including microfibril scale based on hybrid neural network computation and homogenisation equations was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained neural network simulation. Finite element (FE) calculation is performed at nanoscopic levels to provide a database to train an in-house neural network program; (iii) in steps 2 to 10 from fibril to continuum cortical bone tissue, homogenisation equations are used to perform the computation at the higher s...

  4. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks in the frame of neuro-biology, the multi-layer perceptron is introduced, together with learning and its use as a data classifier. The concept is then presented in a second part using in more detail the mathematical approach, focussing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  5. Introduction to neural networks

    CERN Document Server

    James, Frederick E

    1994-02-02

    1. Introduction and overview of Artificial Neural Networks. 2, 3. The Feed-forward Network as an Inverse Problem, and results on the computational complexity of network training. 4. Physics applications of neural networks.

  6. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)]

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
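
    A minimal numerical sketch (not from the paper) of the contrast described above: a classical layer computes sums of products, while a morphological layer computes maxima of sums, so its output is nonlinear even before any thresholding. All names and values below are illustrative.

```python
import numpy as np

def linear_layer(x, W, b):
    """Classical neuron layer: sums of products (weighted sum plus bias)."""
    return W @ x + b

def morphological_layer(x, W):
    """Morphological (dilation-style) layer: each output is the maximum of
    input-plus-weight terms, so the computation is nonlinear by itself."""
    return np.max(W + x, axis=1)          # W has shape (n_out, n_in)

x = np.array([0.2, -1.0, 0.7])
W = np.array([[0.5, 0.0, -0.3],
              [1.0, 0.2,  0.1]])
print(linear_layer(x, W, np.zeros(2)))    # sums of products
print(morphological_layer(x, W))          # maxima of sums
```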

  7. A density-functional theory-based neural network potential for water clusters including van der Waals corrections.

    Science.gov (United States)

    Morawietz, Tobias; Behler, Jörg

    2013-08-15

    The fundamental importance of water for many chemical processes has motivated the development of countless efficient but approximate water potentials for large-scale molecular dynamics simulations, from simple empirical force fields to very sophisticated flexible water models. Accurate and generally applicable water potentials should fulfill a number of requirements. They should have a quality close to quantum chemical methods, they should explicitly depend on all degrees of freedom including all relevant many-body interactions, and they should be able to describe molecular dissociation and recombination. In this work, we present a high-dimensional neural network (NN) potential for water clusters based on density-functional theory (DFT) calculations, which is constructed using clusters containing up to 10 monomers and is in principle able to meet all these requirements. We investigate the reliability of specific parametrizations employing two frequently used generalized gradient approximation (GGA) exchange-correlation functionals, PBE and RPBE, as reference methods. We find that the binding energy errors of the NN potentials with respect to DFT are significantly lower than the typical uncertainties of DFT calculations arising from the choice of the exchange-correlation functional. Further, we examine the role of van der Waals interactions, which are not properly described by GGA functionals. Specifically, we incorporate the D3 scheme suggested by Grimme (J. Chem. Phys. 2010, 132, 154104) in our potentials and demonstrate that it can be applied to GGA-based NN potentials in the same way as to DFT calculations without modification. Our results show that the description of small water clusters provided by the RPBE functional is significantly improved if van der Waals interactions are included, while in case of the PBE functional, which is well-known to yield stronger binding than RPBE, van der Waals corrections lead to overestimated binding energies.
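
    The additive scheme described above can be summarized as E_total = E_NN + E_disp. The sketch below illustrates that composition only; the NN energy is a placeholder and the dispersion term is a bare -C6/r^6 pair sum rather than the actual damped D3 correction, so all functions and parameters here are illustrative assumptions.

```python
import numpy as np

def nn_energy(positions):
    """Placeholder for the short-range neural network potential energy
    (in practice, the output of a network trained on DFT reference data)."""
    return 0.0

def dispersion_energy(positions, c6=1.0, r_cut=10.0):
    """Crude -C6/r^6 pair sum with a cutoff, standing in for the damped
    D3 correction; c6 and r_cut are illustrative placeholders."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < r_cut:
                e -= c6 / r**6
    return e

def total_energy(positions):
    # Additive scheme: E_total = E_NN + E_disp
    return nn_energy(positions) + dispersion_energy(positions)

positions = np.random.default_rng(0).normal(scale=3.0, size=(6, 3))
print(total_energy(positions))
```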

  8. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural networks and machine learning in a statistical framework, this book offers a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  9. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  10. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....
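
    A minimal sketch of the ensemble idea, assuming scikit-learn is available: several similar networks are trained from different random initialisations and their predicted class probabilities are averaged. The data set and network sizes are placeholders, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Five similar networks, differing only in their random initialisation
members = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=seed).fit(X_tr, y_tr)
           for seed in range(5)]

# Ensemble prediction: average the members' class-probability estimates
proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)
print("ensemble accuracy:", np.mean(proba.argmax(axis=1) == y_te))
```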

  11. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  12. Neural network applications

    Science.gov (United States)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  13. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  14. Hyperbolic Hopfield neural networks.

    Science.gov (United States)

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.
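
    As a rough illustration (not the model of the brief itself), the sketch below implements split-complex (hyperbolic) arithmetic, the algebra behind hyperbolic neurons: states of the form cosh(θ) + u·sinh(θ) with u² = +1 lie on a hyperbola rather than on the unit circle of complex phasors, which is why the start and end states never coincide. All names are illustrative.

```python
import math

class Hyperbolic:
    """Split-complex number a + b*u with u**2 = +1 (illustrative helper)."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __mul__(self, other):
        # (a + bu)(c + du) = (ac + bd) + (ad + bc)u, because u^2 = +1
        return Hyperbolic(self.a * other.a + self.b * other.b,
                          self.a * other.b + self.b * other.a)
    def __repr__(self):
        return f"{self.a:+.4f} {self.b:+.4f}u"

def hyper_state(theta):
    """exp(u*theta) = cosh(theta) + u*sinh(theta): a^2 - b^2 = 1, a hyperbola,
    so unlike a phasor the state never returns to its starting point."""
    return Hyperbolic(math.cosh(theta), math.sinh(theta))

print(hyper_state(0.5) * hyper_state(1.0))   # equals hyper_state(1.5)
print(hyper_state(1.5))
```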

  15. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  16. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  17. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
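
    A small numpy sketch of the framing above, with an illustrative blur kernel: one-dimensional deconvolution is written as y = Hx with a Toeplitz convolution matrix H, and a pseudo-inverse solution is compared with a simple LMS-style iteration; a backpropagation network trained to invert H would play the same role.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.6, 0.2])                    # illustrative blur kernel
x_true = rng.normal(size=32)

# Toeplitz convolution matrix H, so that y = H @ x_true is the blurred signal
H = np.array([[h[i - j] if 0 <= i - j < len(h) else 0.0
               for j in range(32)] for i in range(32)])
y = H @ x_true

x_pinv = np.linalg.pinv(H) @ y                   # pseudo-inverse deconvolution

x_lms = np.zeros(32)                             # LMS-style gradient iterations
mu = 0.05
for _ in range(2000):
    x_lms += mu * H.T @ (y - H @ x_lms)

print(np.linalg.norm(x_pinv - x_true), np.linalg.norm(x_lms - x_true))
```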

  18. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems, and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  19. Neural network technologies

    Science.gov (United States)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  20. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  1. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
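
    A minimal sketch of the two classical adaptive neurons covered in lecture 2, with illustrative data and learning rates: the perceptron rule updates only on misclassification of the thresholded output, while the ADALINE (Widrow-Hoff/LMS) rule updates on the error of the linear output before thresholding.

```python
import numpy as np

def perceptron_update(w, x, target, lr=0.1):
    """Perceptron rule: threshold first, update only when the sign is wrong."""
    y = 1.0 if w @ x > 0 else -1.0
    return w + lr * (target - y) * x

def adaline_update(w, x, target, lr=0.1):
    """ADALINE / Widrow-Hoff (LMS) rule: update on the linear output's error."""
    y = w @ x
    return w + lr * (target - y) * x

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
t = np.sign(X @ np.array([1.0, -2.0, 0.5]))      # linearly separable targets

w = np.zeros(3)
for _ in range(20):                              # a few epochs of the perceptron rule
    for x_i, t_i in zip(X, t):
        w = perceptron_update(w, x_i, t_i)
print("learned weights:", w)
```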

  2. Application of artificial neural network for vapor liquid equilibrium calculation of ternary system including ionic liquid: Water, ethanol and 1-butyl-3-methylimidazolium acetate

    Energy Technology Data Exchange (ETDEWEB)

    Fazlali, Alireza; Koranian, Parvaneh [Arak University, Arak (Iran, Islamic Republic of); Beigzadeh, Reza [Islamic Azad University, Kermanshah (Iran, Islamic Republic of); Rahimi, Masoud [Razi University, Kermanshah (Iran, Islamic Republic of)

    2013-09-15

    A feed-forward three-layer artificial neural network (ANN) model was developed for VLE prediction of ternary systems including an ionic liquid (IL) (water + ethanol + 1-butyl-3-methylimidazolium acetate), in a relatively wide range of IL mass fractions up to 0.8, with the mole fractions of ethanol on an IL-free basis fixed separately at 0.1, 0.2, 0.4, 0.6, 0.8, and 0.98. The output results of the ANN were the mole fraction of ethanol in the vapor phase and the equilibrium temperature. The validity of the model was evaluated through a test data set, which was not employed in the training of the network. The performance of the ANN model for estimating the mole fraction and temperature in the ternary system including IL was compared with the non-random-two-liquid (NRTL) and electrolyte non-random-two-liquid (eNRTL) models. The results of this comparison show that the ANN model has a superior performance in predicting the VLE of ternary systems including ionic liquids.
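
    The sketch below reproduces only the input/output structure described above (three inputs; two outputs: vapor-phase ethanol mole fraction and equilibrium temperature), assuming scikit-learn is available; the data are random placeholders rather than measured VLE data, so the reported score is meaningless and serves only to show the workflow.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                    # 3 inputs (composition/state variables)
Y = np.column_stack([rng.uniform(size=200),       # vapor-phase ethanol mole fraction
                     350.0 + 30.0 * rng.uniform(size=200)])  # equilibrium temperature / K

# Feed-forward three-layer network: inputs -> one hidden layer -> two outputs
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X[:160], Y[:160])
print("held-out R^2 (meaningless on random data):", model.score(X[160:], Y[160:]))
```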

  3. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also explores questions related to the biological underpinning of models of neural networks, considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities, and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics who have an interest in app...

  4. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  5. Fin-and-tube condenser performance evaluation using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ling-Xiao [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); Zhang, Chun-Lu [China R and D Center, Carrier Corporation, No. 3239 Shen Jiang Road, Shanghai 201206 (China)

    2010-05-15

    The paper presents a neural network approach to performance evaluation of the fin-and-tube air-cooled condensers which are widely used in air-conditioning and refrigeration systems. Inputs of the neural network include refrigerant and air-flow rates, refrigerant inlet temperature and saturated temperature, and entering air dry-bulb temperature. Outputs of the neural network consist of the heating capacity and the pressure drops on both refrigerant and air sides. The multi-input multi-output (MIMO) neural network is separated into multi-input single-output (MISO) neural networks for training. Afterwards, the trained MISO neural networks are combined into a MIMO neural network, which indicates that the number of training data sets is determined by the biggest MISO neural network, not the whole MIMO network. Compared with a validated first-principle model, the standard deviations of the neural network models are less than 1.9%, and all errors fall within ±5%. (author)
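
    A sketch of the MISO decomposition strategy described above, assuming scikit-learn is available: one multi-input single-output network is trained per output, and the trained networks are stacked so they behave as a single MIMO model. The data are synthetic placeholders, not condenser measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 5))                   # 5 inputs (flow rates, temperatures, ...)
Y = np.column_stack([X @ rng.uniform(size=5) + 0.05 * rng.normal(size=300)
                     for _ in range(3)])         # 3 outputs (capacity, two pressure drops)

# Train one MISO network per output column ...
miso_models = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                            random_state=k).fit(X, Y[:, k])
               for k in range(Y.shape[1])]

# ... then combine them so together they act as the MIMO model.
def mimo_predict(x_new):
    return np.column_stack([m.predict(x_new) for m in miso_models])

print(mimo_predict(X[:2]))
```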

  6. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1991-01-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and non-contrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include data base manipulation and the solving of routing and classification types of optimization problems. It was their use in optimization that got me involved with Neural Networks. As it turned out, "optimization" used in this context was somewhat misleading, because while some network configurations could indeed solve certain kinds of optimization problems, the configuring or "training" of a Neural Network itself is an optimization problem, and most of the literature which talked about Neural Nets and optimization in the same breath did not speak to my goal of using Neural Nets to help solve lens optimization problems. I did eventually apply Neural Network to lens optimization, and I will touch on those results. The application of Neural Nets to the problem of lens selection was much more successful, and those results will dominate this paper.

  7. [Artificial neural networks in Neurosciences].

    Science.gov (United States)

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network whose operation is inspired by the nervous system, the way the inputs are coded, and the process of orthogonalization of patterns.

  8. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  10. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  11. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  12. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  13. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    methods. That is why it is becoming popular in various fields including coastal engineering. Waves and tides will play important roles in coastal erosion or accretion. This paper briefly describes back-propagation neural networks and their application...

  14. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... examined, and it appears that considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible, and therefore it is not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models...... Kalman filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  15. An Optoelectronic Neural Network

    Science.gov (United States)

    Neil, Mark A. A.; White, Ian H.; Carroll, John E.

    1990-02-01

    We describe and present results of an optoelectronic neural network processing system. The system uses an algorithm based on the Hebbian learning rule to memorise a set of associated vector pairs. Recall occurs by the processing of the input vector with these stored associations in an incoherent optical vector multiplier using optical polarisation rotating liquid crystal spatial light modulators to store the vectors and an optical polarisation shadow casting technique to perform multiplications. Results are detected on a photodiode array and thresholded electronically by a controlling microcomputer. The processor is shown to work in autoassociative and heteroassociative modes with up to 10 stored memory vectors of length 64 (equivalent to 64 neurons) and a cycle time of 50ms. We discuss the limiting factors at work in this system, how they affect its scalability and the general applicability of its principles to other systems.
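
    A software sketch of what the optical processor computes: a Hebbian outer-product memory over bipolar vectors of length 64, with recall by vector-matrix multiplication followed by thresholding. The vectors are random placeholders and the electronic thresholding is reduced to a sign function.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_pairs = 64, 64, 10
A = rng.choice([-1, 1], size=(n_pairs, n_in))    # input vectors
B = rng.choice([-1, 1], size=(n_pairs, n_out))   # associated output vectors

# Hebbian learning rule: accumulate outer products of the associated pairs
W = sum(np.outer(b, a) for a, b in zip(A, B))

def recall(x):
    """Optical step: weighted sums (vector-matrix product), then threshold."""
    return np.sign(W @ x)

print(np.all(recall(A[3]) == B[3]))   # usually True while few pairs are stored
```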

  16. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
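
    A rough sketch of the underlying idea, assuming networkx is available: treat the trained weights as a weighted graph over units and run a standard modularity-based community detection on it. This approximates the spirit of the method, not the exact algorithm of the paper; the weights here are random placeholders.

```python
import itertools

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build a weighted graph over units from (placeholder) layer weight matrices
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 8))     # input (6 units) -> hidden (8 units)
W2 = rng.normal(size=(8, 4))     # hidden (8 units) -> output (4 units)

G = nx.Graph()
for i, j in itertools.product(range(6), range(8)):
    G.add_edge(("in", i), ("hid", j), weight=abs(W1[i, j]))
for j, k in itertools.product(range(8), range(4)):
    G.add_edge(("hid", j), ("out", k), weight=abs(W2[j, k]))

# Communities of units with strong mutual connections, via modularity maximisation
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```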

  17. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  18. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    simulated process and compared. The closing chapter describes some practical experiments, where the different control concepts and training methods are tested on the same practical process operating in very noisy environments. All tests confirm that neural networks also have the potential to be trained......The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...

  19. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  20. Optimizing neural network models: motivation and case studies

    OpenAIRE

    Harp, S.A.; Samad, T.

    2012-01-01

    Practical successes have been achieved  with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally  rem...

  1. Applications of Pulse-Coupled Neural Networks

    CERN Document Server

    Ma, Yide; Wang, Zhaobin

    2011-01-01

    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  2. Neural-like growing networks

    Science.gov (United States)

    Yashchenko, Vitaliy A.

    2000-03-01

    Based on an analysis of scientific ideas reflecting regularities in the structure and functioning of the biological structures of the brain, together with the analysis and synthesis of knowledge developed in various branches of computer science, the foundations were developed of a theory of a new class of neural-like growing networks, having no analogue in world practice. At the base of neural-like growing networks is a synthesis of the knowledge developed by two classical theories: semantic networks and neural networks. The first makes it possible to represent meaning as objects and the connections between them in accordance with the construction of the network; each meaning thus receives a separate component of the network as a node connected to other nodes. This largely corresponds to the structure found in the brain, where each explicit concept is represented by a certain structure and has a designating symbol. Secondly, such a network gains increased semantic clarity through the formation not only of connections between neural elements but of the elements themselves, i.e., the network is not simply constructed by placing semantic structures in an environment of neural elements; rather, that very environment is created as an equivalent of a memory environment. Neural-like growing networks therefore provide a convenient apparatus for modeling mechanisms of teleological thinking, as a fulfillment of certain psychophysiological functions.

  3. Prediction of oxidation parameters of purified Kilka fish oil including gallic acid and methyl gallate by adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network.

    Science.gov (United States)

    Asnaashari, Maryam; Farhoosh, Reza; Farahmandfar, Reza

    2016-10-01

    As a result of concerns regarding possible health hazards of synthetic antioxidants, gallic acid and methyl gallate may be introduced as natural antioxidants to improve oxidative stability of marine oil. Since conventional modelling could not predict the oxidative parameters precisely, artificial neural network (ANN) and neuro-fuzzy inference system (ANFIS) modelling with three inputs, including type of antioxidant (gallic acid and methyl gallate), temperature (35, 45 and 55 °C) and concentration (0, 200, 400, 800 and 1600 mg L(-1)), and four outputs, containing induction period (IP), slope of the initial stage of the oxidation curve (k1), slope of the propagation stage of the oxidation curve (k2) and peroxide value at the IP (PVIP), were performed to predict the oxidation parameters of Kilka oil triacylglycerols and were compared to multiple linear regression (MLR). The results showed ANFIS was the best model, with high coefficients of determination (R(2) = 0.99, 0.99, 0.92 and 0.77 for IP, k1, k2 and PVIP, respectively). So, the RMSE and MAE values for IP were 7.49 and 4.92 in the ANFIS model. However, they were 15.95 and 10.88 for the best MLP structure and 34.14 and 3.60 for MLR, respectively. So, MLR showed the minimum accuracy among the constructed models. Sensitivity analysis based on the ANFIS model suggested a high sensitivity of the oxidation parameters, particularly the induction period, to the concentrations of gallic acid and methyl gallate, due to their high antioxidant activity to retard oil oxidation and enhance Kilka oil shelf life. © 2016 Society of Chemical Industry.

  4. Pansharpening by Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Giuseppe Masi

    2016-07-01

    Full Text Available A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem. Moreover, to improve performance without increasing complexity, we augment the input by including several maps of nonlinear radiometric indices typical of remote sensing. Experiments on three representative datasets show the proposed method to provide very promising results, largely competitive with the current state of the art in terms of both full-reference and no-reference metrics, and also at a visual inspection.
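
    A minimal PyTorch sketch of a three-layer, super-resolution-style convolutional architecture adapted to pansharpening: the input stacks the upsampled multispectral bands, the panchromatic band and optional radiometric-index maps, and the output is the sharpened multispectral image. Layer widths, kernel sizes and band counts are illustrative assumptions, not the exact configuration of the paper.

```python
import torch
import torch.nn as nn

class PanSharpenCNN(nn.Module):
    """Three convolutional layers, in the spirit of SRCNN-style architectures."""
    def __init__(self, in_bands=6, out_bands=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, out_bands, kernel_size=5, padding=2),
        )
    def forward(self, x):
        return self.net(x)

# e.g. 4 upsampled MS bands + 1 PAN band + 1 radiometric-index map as input
x = torch.randn(1, 6, 64, 64)
print(PanSharpenCNN()(x).shape)   # torch.Size([1, 4, 64, 64])
```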

  5. Artificial Neural Networks·

    Indian Academy of Sciences (India)

    differences between biological neural networks (BNNs) of the brain and ANNs. A thorough understanding of ... neurons. Artificial neural models are loosely based on biology since a complete understanding of the .... A learning scheme for updating a neuron's connections (weights) was proposed by Donald Hebb in 1949.

  6. Memristor-based neural networks

    Science.gov (United States)

    Thomas, Andy

    2013-03-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them.
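
    As a small illustration of one mechanism named above, the sketch below implements the standard exponential spike-time-dependent plasticity (STDP) window that memristive synapses are used to mimic; the amplitudes and time constant are illustrative placeholders, not values from the article.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a pre/post spike pair; delta_t = t_post - t_pre in ms.
    Pre-before-post (delta_t > 0) potentiates, post-before-pre depresses."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)

for dt in (-40.0, -10.0, -1.0, 1.0, 10.0, 40.0):
    print(dt, round(stdp_dw(dt), 5))
```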

  7. Pansharpening by Convolutional Neural Networks

    National Research Council Canada - National Science Library

    Masi, Giuseppe; Cozzolino, Davide; Verdoliva, Luisa; Scarpa, Giuseppe

    2016-01-01

    A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem...

  8. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb

  9. Biologically Inspired Modular Neural Networks

    OpenAIRE

    Azam, Farooq

    2000-01-01

    This dissertation explores modular learning in artificial neural networks, mainly driven by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological and neurobiological aspects. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning ...

  10. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations used for reaching the minima. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative have continuous values. This makes it possible to exploit the advantage of fast logarithmic convergence by the proposed learning method. Due to the fast convergence of the logarithmic cost function, training time is decreased by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered as an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
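
    The paper's exact cost function is not reproduced here; the sketch below uses a familiar logarithmic cost (cross-entropy) on a single sigmoid unit to illustrate why such costs can reach the minimum in far fewer iterations than squared error: the squared-error gradient carries a sigma'(z) factor that vanishes when the unit saturates, while the logarithmic gradient does not.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def iterations_to_fit(logarithmic, w=6.0, x=1.0, t=0.0, lr=0.5, tol=0.05,
                      max_iter=100000):
    """Train one weight on one example, starting badly saturated (w*x = 6,
    target 0); count gradient steps until the output error drops below tol."""
    for k in range(max_iter):
        y = sigmoid(w * x)
        if abs(y - t) < tol:
            return k
        if logarithmic:        # cross-entropy: dJ/dw = (y - t) * x
            grad = (y - t) * x
        else:                  # squared error: dJ/dw = (y - t) * y * (1 - y) * x
            grad = (y - t) * y * (1.0 - y) * x
        w -= lr * grad
    return max_iter

print("logarithmic cost:", iterations_to_fit(True))
print("squared error   :", iterations_to_fit(False))
```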

  11. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks have been discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  12. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.
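
    A minimal numerical illustration (not the analog fractor circuit of the paper) of the Grünwald-Letnikov construction behind fractional-order dynamics: the derivative of order α at the current sample is a weighted sum over all past samples, and the slowly decaying weights are what give fractional systems their long-term memory.

```python
import numpy as np

def gl_fractional_derivative(x, alpha, h):
    """Grunwald-Letnikov estimate of D^alpha x at the last sample of x,
    sampled uniformly with step h (lower terminal at the first sample)."""
    n = len(x)
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)   # (-1)^j * binom(alpha, j)
    return h ** (-alpha) * np.sum(c * x[::-1])        # weighted sum over the past

t = np.linspace(0.0, 2.0, 201)
x = t ** 2
h = t[1] - t[0]
print(gl_fractional_derivative(x, 1.0, h))   # close to d/dt t^2 = 4 at t = 2
print(gl_fractional_derivative(x, 0.5, h))   # half-order derivative at t = 2
```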

  13. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    National Research Council Canada - National Science Library

    Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey

    2006-01-01

    .... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...

  14. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  15. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Purpose: To develop an effective analytical method to distinguish old peels of Xinhui Pericarpium citri reticulatae (XPCR) stored for > 3 years from new peels stored for < 3 years. Methods: Artificial neural network (ANN) models, including general regression neural network (GRNN) and multi-layer feedforward neural ...

  16. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network in this paper is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameters. Moreover, the neural network has lower model complexity, with the number of state variables equal to the dimension of the optimization problem. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
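
    A simplified discrete-time simulation of the generic projection-network idea for a box-constrained quadratic minimax problem: each state flows toward the projection of a gradient step onto its feasible box. The problem data, step sizes and the specific dynamics below are illustrative assumptions, not the model or convergence conditions of the paper.

```python
import numpy as np

def project(z, lo=-1.0, hi=1.0):
    """Projection onto the feasible box [lo, hi]^n."""
    return np.clip(z, lo, hi)

# min over x, max over y of f(x, y) = 0.5 x'Qx + x'Ay - 0.5 y'Ry on boxes
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
R = np.array([[1.5]])
A = np.array([[1.0], [-1.0]])

x, y = np.array([1.0, -1.0]), np.array([0.8])
alpha, dt = 0.5, 0.05
for _ in range(2000):
    gx = Q @ x + A @ y            # gradient of f with respect to x
    gy = A.T @ x - R @ y          # gradient of f with respect to y
    x = x + dt * (project(x - alpha * gx) - x)   # descend in x
    y = y + dt * (project(y + alpha * gy) - y)   # ascend in y

print("saddle-point estimate:", x, y)   # approaches (0, 0), (0,) for this data
```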

  17. Multigradient for Neural Networks for Equalizers

    Directory of Open Access Journals (Sweden)

    Chulhee Lee

    2003-06-01

    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that the multigradient outperforms backpropagation when neural networks are used as classifiers. When neural networks are used as an equalizer in communications, they can be viewed as a classifier. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.

  18. Multiprocessor Neural Network in Healthcare.

    Science.gov (United States)

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and large number of I/O ports make them well suited for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG and EEG.

  19. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1990-07-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and noncontrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include database manipulation and the solving of routing and classification types of optimization problems. Neural Networks are constructed from neurons, which in electronics or software attempt to model, but are not constrained by, the real thing, i.e., neurons in our gray matter. Neurons are simple processing units connected to many other neurons over pathways which modify the incoming signals. A single synthetic neuron typically sums its weighted inputs, runs this sum through a non-linear function, and produces an output. In the brain, neurons are connected in a complex topology; in hardware/software the topology is typically much simpler, with neurons lying side by side, forming layers of neurons which connect to the layer of neurons which receive their outputs. This simplistic model is much easier to construct than the real thing, and yet can solve real problems. The information in a network, or its "memory", is completely contained in the weights on the connections from one neuron to another. Establishing these weights is called "training" the network. Some networks are trained by design -- once constructed, no further learning takes place. Other types of networks require iterative training once wired up, but are not trainable once taught. Still other types of networks can continue to learn after initial construction. The main benefit of using Neural Networks is their ability to work with conflicting or incomplete ("fuzzy") data sets. This ability and its usefulness will become evident in the following
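
    A minimal sketch of the neuron and layer arrangement described above: each unit sums its weighted inputs and passes the sum through a non-linear function, and a layer is many such units evaluated as a matrix product. The sizes, weights, and tanh nonlinearity are illustrative choices.

```python
import numpy as np

# One synthetic neuron: weighted sum of inputs passed through a non-linearity.
def neuron(inputs, weights, bias):
    return np.tanh(np.dot(weights, inputs) + bias)

# A layer is just many neurons evaluated at once (one row of W per neuron).
def layer(inputs, W, b):
    return np.tanh(W @ inputs + b)

x = np.array([0.5, -1.2, 3.0])
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # hidden layer: 4 neurons
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # output layer: 2 neurons
hidden = layer(x, W1, b1)
output = layer(hidden, W2, b2)
print("network output:", output)
```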

  20. Neutron spectrometry with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail: rvega@cantera.reduaz.mx

    2005-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  1. Neutron spectrometry using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Carrillo, Hector Rene [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]. E-mail: fermineutron@yahoo.com; Martin Hernandez-Davila, Victor [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Manzanares-Acuna, Eduardo [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Mercado Sanchez, Gema A. [Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Pilar Iniguez de la Torre, Maria [Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid, Valladolid (Spain); Barquero, Raquel [Hospital Universitario Rio Hortega, Valladolid (Spain); Palacios, Francisco; Mendez Villafane, Roberto [Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid, Valladolid (Spain)]|[Universidad Europea Miguel de Cervantes, C. Padre Julio Chevalier No. 2, 47012 Valladolid (Spain); Arteaga Arteaga, Tarcicio [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Envases de Zacatecas, SA de CV, Parque Industrial de Calera de Victor Rosales, Zac. (Mexico); Manuel Ortiz Rodriguez, Jose [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)

    2006-04-15

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem.
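
    A sketch of the unfolding idea with purely synthetic data: the real 129 measured spectra, the UTA4 response matrix, and the Matlab training are not reproduced here. A small network is trained to map seven Bonner-sphere count rates back to a 31-group spectrum, and the root mean square error is reported as in the record; all sizes and the stand-in response matrix are assumptions.

```python
import numpy as np

# Synthetic stand-ins for spectra, response matrix and count rates.
rng = np.random.default_rng(2)
n_groups, n_spheres, n_train = 31, 7, 500
R = rng.uniform(0.0, 1.0, size=(n_spheres, n_groups))        # stand-in response matrix

spectra = rng.gamma(2.0, 1.0, size=(n_train, n_groups))
spectra /= spectra.sum(axis=1, keepdims=True)                # normalised fluence per group
counts = spectra @ R.T                                       # expected count rates (folding)

# One-hidden-layer network trained by gradient descent: counts -> spectrum.
W1 = 0.1 * rng.standard_normal((n_spheres, 40)); b1 = np.zeros(40)
W2 = 0.1 * rng.standard_normal((40, n_groups)); b2 = np.zeros(n_groups)
lr = 0.1
for epoch in range(2000):
    H = np.tanh(counts @ W1 + b1)
    pred = H @ W2 + b2                                       # linear output layer
    err = pred - spectra
    gW2 = H.T @ err / n_train; gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = counts.T @ dH / n_train; gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((np.tanh(counts @ W1 + b1) @ W2 + b2 - spectra) ** 2))
print("training RMSE between original and unfolded spectra:", rmse)
```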

  2. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes whose frequency is either dictated by the control room operator or is time-based. The intent of this project is to implement a neural network-based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal-fired boiler. Through unique on-line adaptive technology, neural network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate.

  3. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes whose frequency is either dictated by the control room operator or is time-based. The intent of this project is to implement a neural network-based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal-fired boiler. Through unique on-line adaptive technology, neural network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate.

  4. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes whose frequency is either dictated by the control room operator or is time-based. The intent of this project is to implement a neural network-based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal-fired boiler. Through unique on-line adaptive technology, neural network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate.

  5. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  6. Voltage Compensation Using Artificial Neural Network

    African Journals Online (AJOL)

    Offor Theophilos

    VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK: A CASE STUDY OF RUMUOLA ... using artificial neural network (ANN) controller based dynamic voltage restorer (DVR). ... substation by simulating with sample of average voltage for Omerelu, Waterlines, Rumuola, Shell Industrial and Barracks.

  7. Plant Growth Models Using Artificial Neural Networks

    Science.gov (United States)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  8. Automatic identification of species with neural networks.

    Science.gov (United States)

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% true positive identifications for fish, 92.87% for plants, and 93.25% for butterflies. Our results highlight how neural networks can complement species identification.

  9. Automatic identification of species with neural networks

    Directory of Open Access Journals (Sweden)

    Andrés Hernández-Serna

    2014-11-01

    Full Text Available A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% true positive identifications for fish, 92.87% for plants, and 93.25% for butterflies. Our results highlight how neural networks can complement species identification.

  10. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    Optoelectronic Implementation of Neural Networks - Use of Optics in Computing. R Ramachandran. General Article, Resonance – Journal of Science Education, Volume 3, Issue 9, September 1998, pp. 45-55.

  11. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  12. Neural networks for sign language translation

    Science.gov (United States)

    Wilson, Beth J.; Anspach, Gretel

    1993-09-01

    A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here is used to classify video images of handshapes into their linguistic counterpart in American Sign Language. The video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors are then used as inputs to a neural network that classifies their shapes. The network is trained with various examples from different signers and is tested with new images from new signers. The results have shown that for coarse handshape classes, the network is invariant to the type of camera used to film the various signers and to the segmentation technique.
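
    A sketch of the preprocessing step described above: a closed silhouette contour is encoded as complex numbers and its low-order Fourier coefficients serve as shape descriptors that feed a classifier. The contour below is a synthetic toy shape, not a segmented hand image, and the normalisation choices are illustrative.

```python
import numpy as np

# Fourier descriptors of a closed contour: translation invariance from dropping
# the DC term, scale invariance from normalising by the first harmonic, and
# rotation/start-point invariance from using magnitudes only.
def fourier_descriptors(xs, ys, n_coeffs=10):
    z = xs + 1j * ys                      # contour points as complex numbers
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                       # remove DC term (translation)
    mags = np.abs(coeffs)
    mags /= mags[1]                       # normalise by first harmonic (scale)
    return mags[1:n_coeffs + 1]

t = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
xs = 10.0 * np.cos(t) + 2.0 * np.cos(5 * t)      # toy closed contour
ys = 10.0 * np.sin(t) + 2.0 * np.sin(5 * t)
print(fourier_descriptors(xs, ys))               # descriptor vector for a classifier
```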

  13. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  14. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  15. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    In this paper, a quantum-based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and a separability parameter in a quantum-based manner. The quantum computing concept represents solutions probabilistically ...

  16. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  17. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  18. Implementing Signature Neural Networks with Spiking Neurons

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  19. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2009-11-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  20. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with a computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications, the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams both in general and specific ways to identify their typing behaviors and subsequently to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through the FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, with 70% of all test scores above the basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool to analyze the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
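
    An illustrative sketch only: the FTDNN, time-gap, and PNN models of the paper are not reproduced. A time-delay style predictor maps a window of the last k typed symbols (one-hot coded) to a distribution over the next symbol; the training text below is a toy stand-in for a typing stream, and all sizes are assumptions.

```python
import numpy as np

# Toy "typing stream" and alphabet.
text = "the quick brown fox jumps over the lazy dog " * 50
alphabet = sorted(set(text))
idx = {c: i for i, c in enumerate(alphabet)}
V, k = len(alphabet), 4

def one_hot_window(s, start):
    # one-hot code the k symbols preceding the prediction target
    x = np.zeros(k * V)
    for j in range(k):
        x[j * V + idx[s[start + j]]] = 1.0
    return x

X = np.array([one_hot_window(text, i) for i in range(len(text) - k)])
y = np.array([idx[text[i + k]] for i in range(len(text) - k)])

rng = np.random.default_rng(3)
W = 0.01 * rng.standard_normal((k * V, V)); b = np.zeros(V)
for epoch in range(300):                                   # softmax predictor over the delay window
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0                         # gradient of cross-entropy
    W -= 0.5 * X.T @ p / len(y); b -= 0.5 * p.mean(axis=0)

pred = np.argmax(X @ W + b, axis=1)
print("first-hit rate on the training stream:", np.mean(pred == y))
```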

  1. Dynamic properties of cellular neural networks

    Directory of Open Access Journals (Sweden)

    Angela Slavova

    1993-01-01

    Full Text Available The dynamic behavior of a new class of information-processing systems called Cellular Neural Networks is investigated. In this paper we introduce a small parameter in the state equation of a cellular neural network and we seek periodic phenomena. A new approach is used for proving the stability of a cellular neural network by constructing Lyapunov's majorizing equations. This algorithm is helpful for finding a map from the initial continuous state space of a cellular neural network into a discrete output. A comparison between cellular neural networks and cellular automata is made.
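
    For context, a sketch of the standard cellular neural network state equation that the record's analysis builds on, integrated by the Euler method on a small grid: dx_ij/dt = -x_ij + sum A*y + sum B*u + I with output y(x) = 0.5(|x+1| - |x-1|). The templates, grid size, and bias below are illustrative choices, not the ones analysed in the paper.

```python
import numpy as np

def output(x):
    # piecewise-linear CNN output function
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_step(x, u, A, B, I, dt=0.05):
    # one Euler step of the cell state equation with 3x3 feedback (A) and control (B) templates
    y = output(x)
    pad = lambda z: np.pad(z, 1, mode="constant")
    yp, up = pad(y), pad(u)
    feedback = np.zeros_like(x); feedforward = np.zeros_like(x)
    for di in range(3):
        for dj in range(3):
            feedback += A[di, dj] * yp[di:di + x.shape[0], dj:dj + x.shape[1]]
            feedforward += B[di, dj] * up[di:di + x.shape[0], dj:dj + x.shape[1]]
    return x + dt * (-x + feedback + feedforward + I)

rng = np.random.default_rng(4)
u = np.where(rng.random((16, 16)) > 0.5, 1.0, -1.0)     # binary input image
x = np.zeros_like(u)
A = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.0]])      # illustrative templates
B = np.array([[-1.0, -1.0, -1.0], [-1.0, 8.0, -1.0], [-1.0, -1.0, -1.0]])
for _ in range(200):
    x = cnn_step(x, u, A, B, I=-1.0)
print(output(x))                                        # settled binary-valued output
```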

  2. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  3. Fuzzy logic and neural network technologies

    Science.gov (United States)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  4. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    A multichannel optical correlator implements a shift-invariant, high-discrimination pattern-recognizing neural network based on the paradigm of the neocognitron. The correlator was selected as the basic building block of this neural network because invariance under shifts is an inherent advantage of the Fourier optics included in optical correlators in general. The neocognitron is a conceptual electronic neural-network model for the recognition of visual patterns. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. The neural network is trained using characteristic features extracted from target images. The multichannel implementation enables parallel processing of a large number of selected features.

  5. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  6. Liquefaction Microzonation of Babol City Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Choobbasti, A.J.; Barari, Amin

    2012-01-01

    that will be less susceptible to damage during earthquakes. The scope of the present study is to prepare the liquefaction microzonation map for the city of Babol based on the Seed and Idriss (1983) method using an artificial neural network. Artificial neural network (ANN) is one of the artificial intelligence (AI) approaches...... is proposed in this paper. To meet this objective, an effort is made to introduce data from a total of 30 boreholes in an area of 7 km², which include the results of field tests, into the neural network model, and the prediction of the artificial neural network is checked in some test boreholes; finally, the liquefaction...

  7. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  8. Computational capabilities of graph neural networks.

    Science.gov (United States)

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    In this paper, we will consider the approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function τ(G, n) ∈ R^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries for which the unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model.
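
    A minimal sketch of a function τ(G, n) → R^m in the GNN spirit: node states are updated from neighbour states until they settle, and the state of the queried node n is read out. The weights here are random, no training is shown, and all dimensions are illustrative rather than the model of the paper.

```python
import numpy as np

def tau(adjacency, node, state_dim=4, out_dim=2, iters=20, seed=0):
    # Iterated (contraction-style) state update from neighbour states, then a
    # linear readout of the queried node's state: an m-dimensional output.
    rng = np.random.default_rng(seed)
    n_nodes = adjacency.shape[0]
    Wself = 0.3 * rng.standard_normal((state_dim, state_dim))
    Wngb = 0.3 * rng.standard_normal((state_dim, state_dim))
    Wout = rng.standard_normal((out_dim, state_dim))
    h = np.zeros((n_nodes, state_dim))
    for _ in range(iters):
        msg = adjacency @ h                        # sum of neighbour states
        h = np.tanh(h @ Wself.T + msg @ Wngb.T)
    return Wout @ h[node]

# a small undirected graph: edges 0-1, 1-2, 2-0, 2-3
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 0), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(tau(A, node=3))
```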

  9. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  10. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  11. Foetal ECG recovery using dynamic neural networks.

    Science.gov (United States)

    Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan

    2004-07-01

    Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetus state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise to the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both
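
    A sketch of the classical benchmark only: an LMS adaptive noise canceller on synthetic signals. The "abdominal" recording is a foetal-like component plus a filtered maternal-like reference; LMS learns the filter and the residual approximates the foetal signal. The FIR and gamma neural models of the paper are not reproduced, and all signals, filter lengths, and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
t = np.arange(n)
maternal_ref = np.sin(2 * np.pi * t / 80.0)                 # reference electrode (maternal-like)
foetal = 0.3 * np.sign(np.sin(2 * np.pi * t / 33.0))        # toy foetal-like component
channel = np.array([0.8, 0.4, -0.2])                        # unknown propagation to the abdomen
abdominal = np.convolve(maternal_ref, channel)[:n] + foetal + 0.05 * rng.standard_normal(n)

L, mu = 8, 0.01
w = np.zeros(L)
recovered = np.zeros(n)
for i in range(L, n):
    x = maternal_ref[i - L:i][::-1]            # most recent reference samples
    e = abdominal[i] - w @ x                   # cancel the estimated maternal contribution
    w += 2 * mu * e * x                        # LMS weight update
    recovered[i] = e                           # residual ~ foetal signal estimate

corr = np.corrcoef(recovered[L:], foetal[L:])[0, 1]
print("correlation between recovered and true foetal component:", corr)
```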

  12. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is dedicated to investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. The document also describes the MemBrain application environment as a powerful and easy-to-use neural network editor and simulator supporting ANNs.

  13. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
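
    A sketch of two map-based (Rulkov-type) neurons coupled by fast threshold modulation: the synaptic term switches on only while the presynaptic fast variable is above a threshold, so the system combines discrete-time evolution with state-dependent switching. The map form, parameter values, and coupling constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

alpha, mu, sigma = 4.1, 0.001, -1.2     # Rulkov-style map parameters (illustrative)
g, theta, x_rev = 0.1, 0.0, 1.0         # FTM coupling strength, threshold, reversal level

def rulkov_step(x, y, I_syn):
    # fast variable x, slow variable y, plus a synaptic input on the fast variable
    x_new = alpha / (1.0 + x * x) + y + I_syn
    y_new = y - mu * (x - sigma)
    return x_new, y_new

steps = 20000
x = np.array([-1.0, -1.5]); y = np.array([-3.0, -3.2])
trace = np.zeros((steps, 2))
for n in range(steps):
    # FTM: neuron j drives neuron i only while x_j is above the threshold
    I = np.array([
        g * (x_rev - x[0]) * (x[1] > theta),
        g * (x_rev - x[1]) * (x[0] > theta),
    ])
    for i in range(2):
        x[i], y[i] = rulkov_step(x[i], y[i], I[i])
    trace[n] = x
print("fraction of steps with both fast variables above threshold:",
      np.mean((trace[:, 0] > theta) & (trace[:, 1] > theta)))
```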

  14. Automated Modeling of Microwave Structures by Enhanced Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-12-01

    Full Text Available The paper describes the methodology of the automated creation of neural models of microwave structures. During the creation process, artificial neural networks are trained using the combination of the particle swarm optimization and the quasi-Newton method to avoid critical training problems of the conventional neural nets. In the paper, neural networks are used to approximate the behavior of a planar microwave filter (moment method, Zeland IE3D. In order to evaluate the efficiency of neural modeling, global optimizations are performed using numerical models and neural ones. Both approaches are compared from the viewpoint of CPU-time demands and the accuracy. Considering conclusions, methodological recommendations for including neural networks to the microwave design are formulated.

  15. Satellite image analysis using neural networks

    Science.gov (United States)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  16. Fuzzy neural networks: theory and applications

    Science.gov (United States)

    Gupta, Madan M.

    1994-10-01

    During recent years, significant advances have been made in two distinct technological areas: fuzzy logic and computational neural networks. The theory of fuzzy logic provides a mathematical framework to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. It also provides a mathematical morphology to emulate certain perceptual and linguistic attributes associated with human cognition. On the other hand, the computational neural network paradigms have evolved in the process of understanding the incredible learning and adaptive features of neuronal mechanisms inherent in certain biological species. Computational neural networks replicate, on a small scale, some of the computational operations observed in biological learning and adaptation. The integration of these two fields, fuzzy logic and neural networks, has given birth to an emerging technological field -- fuzzy neural networks. Fuzzy neural networks have the potential to capture the benefits of these two fascinating fields, fuzzy logic and neural networks, into a single framework. The intent of this tutorial paper is to describe the basic notions of biological and computational neuronal morphologies, and to describe the principles and architectures of fuzzy neural networks. Towards this goal, we develop a fuzzy neural architecture based upon the notion of T-norm and T-conorm connectives. An error-based learning scheme is described for this neural structure.
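
    A minimal sketch of fuzzy neurons built from T-norm/T-conorm connectives, in the spirit of the architecture described above, using min as the T-norm and max as the T-conorm. Inputs and weights are membership degrees in [0, 1]; the particular values and neuron names are illustrative.

```python
import numpy as np

def or_neuron(x, w):
    # OR-type fuzzy neuron: T-conorm (max) over the T-norm (min) of each input with its weight
    return np.max(np.minimum(x, w))

def and_neuron(x, w):
    # AND-type fuzzy neuron: T-norm (min) over the T-conorm (max) of each input with its weight
    return np.min(np.maximum(x, w))

x = np.array([0.2, 0.7, 0.9])       # input membership degrees
w = np.array([0.5, 0.8, 0.1])       # synaptic weights, also in [0, 1]
print("OR-type output :", or_neuron(x, w))    # 0.7
print("AND-type output:", and_neuron(x, w))   # 0.5
```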

  17. Pediatric Nutritional Requirements Determination with Neural Networks

    OpenAIRE

    Karlık, Bekir; Ece, Aydın

    1998-01-01

    To calculate the daily nutritional requirements of children, a computer program has been developed based upon a neural network. Three parameters, daily protein, energy and water requirements, were calculated through trained artificial neural networks using a database of 312 children. The results were compared with those calculated from the dietary requirement tables of the World Health Organisation. No significant difference was found between the two calculations. In conclusion, a simple neural network may ...

  18. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  19. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the
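
    A minimal sketch of genetically training a small feed-forward net on 8x8 bitmaps, in the spirit of the record above. The "characters" here are random prototype bitmaps with noisy copies, and the GA (truncation selection plus Gaussian mutation, no crossover) is deliberately simple; population size, mutation scale, and network sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n_classes, n_hidden, n_in = 5, 6, 64
protos = (rng.random((n_classes, n_in)) > 0.5).astype(float)          # 8x8 prototype bitmaps
X = np.vstack([np.abs(p - (rng.random((20, n_in)) < 0.05)) for p in protos])  # noisy copies
y = np.repeat(np.arange(n_classes), 20)

n_w = n_in * n_hidden + n_hidden + n_hidden * n_classes + n_classes    # genome length

def decode(genome):
    # unpack a flat genome into the weights and biases of a two-layer net
    i = 0
    W1 = genome[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = genome[i:i + n_hidden]; i += n_hidden
    W2 = genome[i:i + n_hidden * n_classes].reshape(n_hidden, n_classes); i += n_hidden * n_classes
    b2 = genome[i:]
    return W1, b1, W2, b2

def accuracy(genome):
    W1, b1, W2, b2 = decode(genome)
    scores = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean(np.argmax(scores, axis=1) == y)

pop = 0.5 * rng.standard_normal((40, n_w))
for generation in range(200):
    fitness = np.array([accuracy(g) for g in pop])
    parents = pop[np.argsort(fitness)[-10:]]                 # keep the 10 fittest genomes
    children = parents[rng.integers(0, 10, size=30)] + 0.1 * rng.standard_normal((30, n_w))
    pop = np.vstack([parents, children])
print("best training accuracy:", max(accuracy(g) for g in pop))
```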

  20. Deep Learning in Neural Networks: An Overview

    OpenAIRE

    Schmidhuber, Juergen

    2014-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpr...

  1. Bayesian regularization of neural networks.

    Science.gov (United States)

    Burden, Frank; Winkler, Dave

    2008-01-01

    Bayesian regularized artificial neural networks (BRANNs) are more robust than standard back-propagation nets and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of a ridge regression. The advantage of BRANNs is that the models are robust and the validation process, which scales as O(N²) in normal regression methods such as back-propagation, is unnecessary. These networks provide solutions to a number of problems that arise in QSAR modeling, such as choice of model, robustness of model, choice of validation set, size of validation effort, and optimization of network architecture. They are difficult to overtrain, since evidence procedures provide an objective Bayesian criterion for stopping training. They are also difficult to overfit, because the BRANN calculates and trains on a number of effective network parameters or weights, effectively turning off those that are not relevant. This effective number is usually considerably smaller than the number of weights in a standard fully connected back-propagation neural net. Automatic relevance determination (ARD) of the input variables can be used with BRANNs, and this allows the network to "estimate" the importance of each input. The ARD method ensures that irrelevant or highly correlated indices used in the modeling are neglected as well as showing which are the most important variables for modeling the activity data. This chapter outlines the equations that define the BRANN method plus a flowchart for producing a BRANN-QSAR model. Some results of the use of BRANNs on a number of data sets are illustrated and compared with other linear and nonlinear models.
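
    A sketch of the evidence-framework updates behind this kind of Bayesian regularization, shown on a linear model (the "ridge regression" analogy in the text) rather than on a full neural network. Here alpha is the regularization strength, beta the noise precision, and gamma the effective number of parameters mentioned above; the data are synthetic and the iteration counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
N, D = 60, 10
Phi = rng.standard_normal((N, D))
w_true = np.zeros(D); w_true[:3] = [2.0, -1.0, 0.5]          # only 3 relevant weights
y = Phi @ w_true + 0.3 * rng.standard_normal(N)

alpha, beta = 1.0, 1.0                                       # regularizer and noise precision
for _ in range(50):
    A = alpha * np.eye(D) + beta * Phi.T @ Phi               # posterior precision
    m = beta * np.linalg.solve(A, Phi.T @ y)                 # posterior mean weights
    eigvals = np.linalg.eigvalsh(beta * Phi.T @ Phi)
    gamma = np.sum(eigvals / (eigvals + alpha))              # effective number of parameters
    alpha = gamma / (m @ m)                                  # re-estimate the regularizer
    beta = (N - gamma) / np.sum((y - Phi @ m) ** 2)          # re-estimate the noise precision

print("effective number of parameters gamma:", gamma)
print("posterior mean weights:", np.round(m, 2))
```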

  2. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others

    1995-12-31

    In this paper, two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
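
    A sketch of the linear-superposition idea described above: with a library of known spectra, a linear mapping (here an ordinary least-squares fit, which an OLAM-style linear network also realizes) recovers the composition of an unknown sample from the whole spectrum rather than from individual photo-peaks. The spectra are synthetic Gaussian-peak stand-ins, not real detector data.

```python
import numpy as np

rng = np.random.default_rng(8)
channels = np.arange(512)

def synthetic_spectrum(peaks):
    # sum of Gaussian peaks on a small flat background, as a stand-in spectrum
    s = np.zeros_like(channels, dtype=float)
    for centre, amplitude in peaks:
        s += amplitude * np.exp(-0.5 * ((channels - centre) / 6.0) ** 2)
    return s + 0.01

library = np.stack([
    synthetic_spectrum([(100, 1.0), (310, 0.4)]),             # "isotope A"
    synthetic_spectrum([(180, 0.8)]),                          # "isotope B"
    synthetic_spectrum([(250, 0.6), (400, 0.9)]),              # "isotope C"
])

true_mix = np.array([0.5, 0.2, 0.3])
unknown = true_mix @ library + 0.01 * rng.standard_normal(len(channels))

coeffs, *_ = np.linalg.lstsq(library.T, unknown, rcond=None)  # whole-spectrum fit
print("estimated composition:", np.round(coeffs, 3))
```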

  3. Neural network based system for equipment surveillance

    Science.gov (United States)

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network output is close to a set of target outputs. If the target outputs are not adequately matched, wavelet parameters are determined so as to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided and compared with the neural network output to evaluate the operating state of the industrial process. 33 figs.

  4. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and in employing carefully designed simulation examples.

  5. Neural networks and perceptual learning

    Science.gov (United States)

    Tsodyks, Misha; Gilbert, Charles

    2005-01-01

    Sensory perception is a learned trait. The brain strategies we use to perceive the world are constantly modified by experience. With practice, we subconsciously become better at identifying familiar objects or distinguishing fine details in our environment. Current theoretical models simulate some properties of perceptual learning, but neglect the underlying cortical circuits. Future neural network models must incorporate the top-down alteration of cortical function by expectation or perceptual tasks. These newly found dynamic processes are challenging earlier views of static and feedforward processing of sensory information. PMID:15483598

  6. Optimization with Potts Neural Networks

    Science.gov (United States)

    Söderberg, Bo

    The Potts Neural Network approach to non-binary discrete optimization problems is described. It applies to problems that can be described as a set of elementary 'multiple choice' options. Instead of the conventional binary (Ising) neurons, mean field Potts neurons, having several available states, are used to describe the elementary degrees of freedom of such problems. The dynamics consists of iterating the mean field equations with annealing until convergence. Due to its deterministic character, the method is quite fast. When applied to problems of Graph Partition and scheduling types, it also produces very good solutions for problems of considerable size.
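
    A compact sketch of the mean-field Potts annealing loop described above is given below; the energy terms, the balance penalty gamma, and the cooling schedule are illustrative assumptions, not the exact formulation used in the text.

        import numpy as np

        def potts_partition(W, K=2, gamma=1.0, T=2.0, cooling=0.95, sweeps=200, seed=0):
            rng = np.random.default_rng(seed)
            n = W.shape[0]
            v = rng.random((n, K))
            v /= v.sum(axis=1, keepdims=True)            # start near the symmetric point
            for _ in range(sweeps):
                # local field: attract nodes to their neighbours' partitions, repel from crowded ones
                h = W @ v - gamma * (v.sum(axis=0) - v)
                e = np.exp((h - h.max(axis=1, keepdims=True)) / T)
                v = e / e.sum(axis=1, keepdims=True)      # mean-field Potts update (softmax over states)
                T *= cooling                              # annealing schedule
            return v.argmax(axis=1)                       # harden the mean-field solution

        # Example: two cliques joined by a single edge should be split apart.
        A = np.zeros((6, 6))
        A[:3, :3] = 1; A[3:, 3:] = 1; A[2, 3] = A[3, 2] = 1
        np.fill_diagonal(A, 0)
        print(potts_partition(A))

    The deterministic, fully vectorised update is what makes the method fast compared with stochastic simulated annealing.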

  7. 23rd Workshop of the Italian Neural Networks Society (SIREN)

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2014-01-01

    This volume collects a selection of contributions which have been presented at the 23rd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare, Salerno, Italy during May 23-24, 2013. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, as well as the workshop, is organized in two main components, a special session and a group of regular sessions featuring different aspects and points of view on artificial neural networks, artificial and natural intelligence, as well as psychological and cognitive theories for modeling human behaviors and human machine interactions, including Information Communication applications of compelling interest.

  8. Three dimensional living neural networks

    Science.gov (United States)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC-derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure, the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  9. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
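
    The spectrum in question can be computed directly from any undirected connectivity matrix, as in the sketch below; the small ring network used here is only a stand-in for the macaque, cat, and C. elegans connectomes analysed in the paper.

        import numpy as np

        def normalized_laplacian_spectrum(A):
            deg = A.sum(axis=1)
            d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
            L = np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
            return np.sort(np.linalg.eigvalsh(L))         # eigenvalues of I - D^(-1/2) A D^(-1/2), all in [0, 2]

        # Example on an 8-node ring network.
        A = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)
        print(normalized_laplacian_spectrum(A))

    Because the spectrum is a global summary, two networks can be compared by their eigenvalue distributions without matching individual nodes, which is the sense in which the paper compares species.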

  10. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    of any exogenous input requirement makes the network attractive. A neural network is an information processing system modeled on the structure of the human brain. Its merit is the ability to deal with fuzzy information whose interrelation is ambiguous...

  11. Autonomous Navigation Apparatus With Neural Network for a Mobile Vehicle

    Science.gov (United States)

    Quraishi, Naveed (Inventor)

    1996-01-01

    An autonomous navigation system for a mobile vehicle arranged to move within an environment includes a plurality of sensors arranged on the vehicle and at least one neural network including an input layer coupled to the sensors, a hidden layer coupled to the input layer, and an output layer coupled to the hidden layer. The neural network produces output signals representing respective positions of the vehicle, such as the X coordinate, the Y coordinate, and the angular orientation of the vehicle. A plurality of patch locations within the environment are used to train the neural networks to produce the correct outputs in response to the distances sensed.
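
    A minimal sketch of the architecture described in this record is shown below; the weights are random placeholders (in the patent they would be trained from known patch locations), and the sensor count and hidden-layer size are arbitrary assumptions.

        import numpy as np

        class PoseNet:
            def __init__(self, n_sensors, n_hidden=16, seed=0):
                rng = np.random.default_rng(seed)
                self.W1 = rng.standard_normal((n_sensors, n_hidden)) * 0.1   # input layer coupled to sensors
                self.b1 = np.zeros(n_hidden)
                self.W2 = rng.standard_normal((n_hidden, 3)) * 0.1           # outputs: X, Y, angular orientation
                self.b2 = np.zeros(3)

            def forward(self, sensors):
                h = np.tanh(sensors @ self.W1 + self.b1)   # hidden layer
                return h @ self.W2 + self.b2               # output layer: estimated pose

        net = PoseNet(n_sensors=8)
        print(net.forward(np.ones(8)))   # [x, y, theta] estimate for one sensor snapshot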

  12. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  13. Radiation Behavior of Analog Neural Network Chip

    Science.gov (United States)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1) 1-b launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  14. Neural network approach to parton distributions fitting

    CERN Document Server

    Piccione, Andrea; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea; Rojo, Joan

    2006-01-01

    We will show an application of neural networks to extract information on the structure of hadrons. A Monte Carlo over experimental data is performed to correctly reproduce data errors and correlations. A neural network is then trained on each Monte Carlo replica via a genetic algorithm. Results on the proton and deuteron structure functions, and on the nonsinglet parton distribution will be shown.

  15. Self-organization of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Clark, J.W.; Winston, J.V.; Rafelski, J.

    1984-05-14

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (brainwashing) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena. 18 references, 2 figures.

  16. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  17. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous spectrophotometric multicomponent analysis are suggested, with a study on the estimation of the components of an antihypertensive combination, namely, atenolol and losartan potassium.

  18. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  19. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility to use Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, a Gaussian energy distribution and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  20. Neural Network to Solve Concave Games

    OpenAIRE

    Zixin Liu; Nengfa Wang

    2014-01-01

    This paper is concerned with a neural network method for solving concave games. Combined with variational inequality, Ky Fan inequality, and projection equation, concave games are transformed into a neural network model. On the basis of the Lyapunov stability theory, some stability results are also given. Finally, two classic games' simulation results are given to illustrate the theoretical results.

  1. Recognizing changing seasonal patterns using neural networks

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); G. Draisma (Gerrit)

    1997-01-01

    In this paper we propose a graphical method based on an artificial neural network model to investigate how and when seasonal patterns in macroeconomic time series change over time. Neural networks are useful since the hidden layer units may become activated only in certain seasons or

  2. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called weights, of the synapses) is varied during the supervised-learning phase of operation according to a mathematical formalism and not a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  3. Initialization of multilayer forecasting artificial neural networks

    OpenAIRE

    Bochkarev, Vladimir V.; Maslennikova, Yulia S.

    2014-01-01

    In this paper, a new method was developed for initialising artificial neural networks predicting dynamics of time series. Initial weighting coefficients were determined for neurons analogously to the case of a linear prediction filter. Moreover, to improve the accuracy of the initialization method for a multilayer neural network, some variants of decomposition of the transformation matrix corresponding to the linear prediction filter were suggested. The efficiency of the proposed neural netwo...

  4. Architecture Analysis of an FPGA-Based Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Angelo de Abreu de Sousa

    2014-01-01

    Full Text Available Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field in order to approach several practical requirements, including decreasing training and operation times in high performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and work on different types of FPGA-based neural networks has been presented in recent years. This paper addresses different aspects of the architectural characteristics of a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.
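
    For readers unfamiliar with the Hopfield model behind the FPGA design, the software sketch below shows the recall dynamics; with bipolar states every product w_ij * s_j reduces to an addition or subtraction, which is the property a multiplier-free hardware implementation can exploit. The stored patterns are arbitrary examples, and the Python code itself uses ordinary arithmetic.

        import numpy as np

        def hopfield_train(patterns):
            P = np.array(patterns, dtype=float)
            W = P.T @ P / P.shape[1]                 # Hebbian outer-product rule, normalised by network size
            np.fill_diagonal(W, 0.0)                 # no self-connections
            return W

        def hopfield_recall(W, state, steps=10):
            s = state.astype(float).copy()
            for _ in range(steps):
                s = np.where(W @ s >= 0, 1.0, -1.0)  # synchronous sign update
            return s

        patterns = [np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, 1, -1, -1, -1])]
        W = hopfield_train(patterns)
        noisy = np.array([1, -1, 1, -1, 1, 1])       # pattern 0 with its last element flipped
        print(hopfield_recall(W, noisy))             # converges back to pattern 0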

  5. Clustering: a neural network approach.

    Science.gov (United States)

    Du, K-L

    2010-01-01

    Clustering is a fundamental data analysis method. It is widely used for pattern recognition, feature extraction, vector quantization (VQ), image segmentation, function approximation, and data mining. As an unsupervised classification technique, clustering identifies some inherent structures present in a set of objects based on a similarity measure. Clustering methods can be based on statistical model identification (McLachlan & Basford, 1988) or competitive learning. In this paper, we give a comprehensive overview of competitive learning based clustering methods. Importance is attached to a number of competitive learning based clustering neural networks such as the self-organizing map (SOM), the learning vector quantization (LVQ), the neural gas, and the ART model, and clustering algorithms such as the C-means, mountain/subtractive clustering, and fuzzy C-means (FCM) algorithms. Associated topics such as the under-utilization problem, fuzzy clustering, robust clustering, clustering based on non-Euclidean distance measures, supervised clustering, hierarchical clustering as well as cluster validity are also described. Two examples are given to demonstrate the use of the clustering methods.
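
    As a small illustration of the winner-take-all principle shared by several of the surveyed networks (SOM, LVQ, neural gas), the sketch below implements plain competitive learning on synthetic two-dimensional data; the prototype count and learning-rate schedule are arbitrary choices.

        import numpy as np

        def competitive_learning(X, n_prototypes=3, epochs=30, lr=0.1, seed=0):
            rng = np.random.default_rng(seed)
            prototypes = X[rng.choice(len(X), n_prototypes, replace=False)].copy()
            for epoch in range(epochs):
                rate = lr * (1.0 - epoch / epochs)                          # decaying learning rate
                for x in X[rng.permutation(len(X))]:
                    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
                    prototypes[winner] += rate * (x - prototypes[winner])   # move only the winning prototype
            return prototypes

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(c, 0.1, size=(50, 2)) for c in ((0, 0), (2, 2), (0, 2))])
        print(competitive_learning(X))    # prototypes settle in the dense regions of the data

    Refinements such as a neighbourhood function (SOM) or a conscience mechanism address the under-utilization problem mentioned above.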

  6. Complex-valued Neural Networks

    Science.gov (United States)

    Hirose, Akira

    This paper reviews the features and applications of complex-valued neural networks (CVNNs). First we list the present application fields, and describe the advantages of the CVNNs in two application examples, namely, an adaptive plastic-landmine visualization system and an optical frequency-domain-multiplexed learning logic circuit. Then we briefly discuss the features of complex number itself to find that the phase rotation is the most significant concept, which is very useful in processing the information related to wave phenomena such as lightwave and electromagnetic wave. The CVNNs will also be an indispensable framework of the future microelectronic information-processing hardware where the quantum electron wave plays the principal role.

  7. Collision avoidance using neural networks

    Science.gov (United States)

    Sugathan, Shilpa; Sowmya Shree, B. V.; Warrier, Mithila R.; Vidhyapathi, C. M.

    2017-11-01

    Nowadays, accidents on roads are caused by the negligence of drivers and pedestrians or by unexpected obstacles that come into the vehicle's path. In this paper, a model (robot) is developed to assist drivers for smooth travel without accidents. It reacts to real-time obstacles on the four critical sides of the vehicle and takes the necessary action. The sensor used for detecting the obstacles was an IR proximity sensor. A single layer perceptron neural network is used to train and test all possible combinations of sensor readings using Matlab (offline). A microcontroller (ARM Cortex-M3 LPC1768) is used to control the vehicle through the output data which is received from Matlab via serial communication. Hence, the vehicle becomes capable of reacting to any combination of real-time obstacles.
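
    A minimal sketch of the offline perceptron training step is given below; the sensor encoding and the "brake/turn" label are hypothetical stand-ins, and the paper itself trains in Matlab before deploying the learned mapping to the microcontroller.

        import numpy as np

        def train_perceptron(X, y, epochs=50, lr=0.1):
            w = np.zeros(X.shape[1] + 1)                      # weights plus a bias term
            Xb = np.hstack([X, np.ones((len(X), 1))])
            for _ in range(epochs):
                for xi, target in zip(Xb, y):
                    pred = 1 if xi @ w > 0 else 0
                    w += lr * (target - pred) * xi            # classic perceptron learning rule
            return w

        # All 16 combinations of four IR sensors (1 = obstacle detected); the hypothetical
        # label fires whenever the front sensor (column 0) sees an obstacle.
        X = np.array([[b >> i & 1 for i in range(4)] for b in range(16)], dtype=float)
        y = X[:, 0]
        w = train_perceptron(X, y)
        preds = [(1 if np.append(x, 1.0) @ w > 0 else 0) for x in X]
        print(preds == list(y.astype(int)))                   # True once the rule is learned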

  8. Program Aids Simulation Of Neural Networks

    Science.gov (United States)

    Baffes, Paul T.

    1990-01-01

    Computer program NETS - Tool for Development and Evaluation of Neural Networks - provides simulation of neural-network algorithms plus a software environment for development of such algorithms. It enables the user to customize the patterns of connections between layers of a network, and provides features for saving the weight values of a network, allowing more precise control over the learning process. Use consists of translating the problem into a format of input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. Written in C.

  9. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology, which classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, which include seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e. their output depends not only on the current input but also on previous outputs. Thus, the accuracy of the LRN classifier among all other artificial neural networks comes out to be the highest. The overall accuracy of jackknife cross-validation is 93.2% for the data-set. These predicted results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that the protein sequence representation can better reflect the core feature of membrane proteins than the classical AA composition.

  10. A renaissance of neural networks in drug discovery.

    Science.gov (United States)

    Baskin, Igor I; Winkler, David; Tetko, Igor V

    2016-08-01

    Neural networks are becoming a very popular method for solving machine learning and artificial intelligence problems. The variety of neural network types and their application to drug discovery requires expert knowledge to choose the most appropriate approach. In this review, the authors discuss traditional and newly emerging neural network approaches to drug discovery. Their focus is on backpropagation neural networks and their variants, self-organizing maps and associated methods, and a relatively new technique, deep learning. The most important technical issues are discussed, including overfitting and its prevention through regularization, ensemble and multitask modeling, model interpretation, and estimation of applicability domain. Different aspects of using neural networks in drug discovery are considered: building structure-activity models with respect to various targets; predicting drug selectivity, toxicity profiles, ADMET and physicochemical properties; characteristics of drug-delivery systems and virtual screening. Neural networks continue to grow in importance for drug discovery. Recent developments in deep learning suggest further improvements may be gained in the analysis of large chemical data sets. It is anticipated that neural networks will be more widely used in drug discovery in the future, and applied in non-traditional areas such as drug delivery systems, biologically compatible materials, and regenerative medicine.

  11. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  12. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in the low altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insights into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can the vehicle's trajectories be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once being fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  13. Research of The Deeper Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao You Rong

    2016-01-01

    Full Text Available Neural networks (NNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. With different network structures, many neural models have been constructed. In this report, a deeper neural network (DNN) architecture is proposed. The training algorithm of the deeper neural network involves searching for the global optimal point on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed from simple to complicated, and the features of the error surface are obtained. Based on these characteristics, the initialization method and training algorithm of DNNs are designed. For the initialization, a block-uniform design method is proposed which separates the error surface into blocks and finds the optimal block using the uniform design method. For the training algorithm, an improved gradient-descent method is proposed which adds a penalty term to the cost function of the classical gradient descent method. This algorithm gives the network a strong approximating ability and keeps the network state stable. All of these improve the practicality of the neural network.

  14. Neural network topology design for nonlinear control

    Science.gov (United States)

    Haecker, Jens; Rudolph, Stephan

    2001-03-01

    Neural networks, especially in nonlinear system identification and control applications, are typically considered to be black boxes which are difficult to analyze and understand mathematically. For this reason, an in-depth mathematical analysis offering insight into the different neural network transformation layers based on a theoretical transformation scheme is desired, but up to now neither available nor known. In previous works it has been shown how proven engineering methods such as dimensional analysis and the Laplace transform may be used to construct a neural controller topology for time-invariant systems. Using the knowledge of neural correspondences of these two classical methods, the internal nodes of the network could also be successfully interpreted after training. As a further extension to these works, the paper describes the latest developments of a theoretical interpretation framework describing the neural network transformation sequences in nonlinear system identification and control. This is achieved by incorporating the method of exact input-output linearization into the two above-mentioned transform sequences of dimensional analysis and the Laplace transformation. Based on these three theoretical considerations, neural network topologies may be designed in special situations by pure translation, in the sense of a structural compilation, of the known classical solutions into their corresponding neural topology. Based on known exemplary results, the paper synthesizes the proposed approach into the visionary goal of a structural compiler for neural networks. This structural compiler for neural networks is intended to automatically convert classical control formulations into their equivalent neural network structure, based on the principles of equivalence between formula and operator, and between operator and structure, which are discussed in detail in this work.

  15. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

    Instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  16. Optical implementation of neural networks

    Science.gov (United States)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computing (ONC) system using inexpensive pocket-size liquid crystal televisions (LCTVs) has been developed by the graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computing system has only 8×8=64 neurons, it can be easily extended to 16×20=320 neurons. The major advantages of this LCTV architecture, as compared with other reported ONCs, are its low cost and flexibility of operation. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  17. Optical production systems using neural networks and symbolic substitution

    Science.gov (United States)

    Botha, Elizabeth; Casasent, David; Barnard, Etienne

    1988-01-01

    Two optical implementations of production systems are advanced. The production systems operate on a knowledge base where facts and rules are encoded as formulas in propositional calculus. The first implementation is a binary neural network. An analog neural network is used to include reasoning with uncertainties. The second implementation uses a new optical symbolic substitution correlator. This implementation is useful when a set of similar situations has to be handled in parallel on one processor.

  18. Recurrent neural networks training with stable bounding ellipsoid algorithm.

    Science.gov (United States)

    Yu, Wen; de Jesús Rubio, José

    2009-06-01

    Bounding ellipsoid (BE) algorithms offer an attractive alternative to traditional training algorithms for neural networks, for example, backpropagation and least squares methods. The benefits include high computational efficiency and fast convergence speed. In this paper, we propose an ellipsoid propagation algorithm to train the weights of recurrent neural networks for nonlinear systems identification. Both hidden layers and output layers can be updated. The stability of the BE algorithm is proven.

  19. Classification-based Financial Markets Prediction using Deep Neural Networks

    OpenAIRE

    Dixon, Matthew; Klabjan, Diego; Bang, Jin Hoon

    2016-01-01

    Deep neural networks (DNNs) are powerful types of artificial neural networks (ANNs) that use several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition community (Krizhevsky et al., 2012) for their superior predictive properties including robustness to overfitting. However their application to algorithmic trading has not been previously researched, partly because of their computational complexity. This paper describes the applicat...

  20. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, in which the neural network topology and other parameters were fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
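
    The sketch below shows the general idea of evolving feed-forward network weights with a genetic algorithm on a toy regression task; the population size, mutation scale, network shape, and the sine-wave target are illustrative assumptions, not the settings of the study, which used the Yen/Dollar series.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(64, 1))
        y = np.sin(3 * X).ravel()                        # toy target instead of exchange-rate data

        N_IN, N_HID = 1, 6
        N_W = N_IN * N_HID + N_HID + N_HID + 1           # weights and biases, flattened into one genome

        def forward(w, X):
            W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
            b1 = w[N_IN * N_HID:N_IN * N_HID + N_HID]
            W2 = w[-(N_HID + 1):-1]
            b2 = w[-1]
            return np.tanh(X @ W1 + b1) @ W2 + b2

        def fitness(w):
            return -np.mean((forward(w, X) - y) ** 2)    # negative MSE (higher is better)

        pop = rng.standard_normal((60, N_W))
        for gen in range(200):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-20:]]                                      # truncation selection
            children = parents[rng.integers(0, 20, 40)] + 0.1 * rng.standard_normal((40, N_W))
            pop = np.vstack([parents, children])                                         # elitism plus mutated offspring
        best = pop[np.argmax([fitness(w) for w in pop])]
        print("final MSE:", -fitness(best))

    In a hybrid setup, a genome like this could also seed a subsequent back-propagation refinement.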

  1. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing con-sistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
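
    The core idea of estimating a conditional quantile can be sketched with the pinball (check) loss; for brevity the model below is linear and trained by plain subgradient descent, which is a simplification of, not a substitute for, the kernel-based network structure developed in the paper.

        import numpy as np

        def pinball_grad(residual, tau):
            # derivative of the check loss rho_tau(r) with respect to the prediction
            return np.where(residual >= 0, -tau, 1.0 - tau)

        def fit_quantile(X, y, tau=0.9, lr=0.05, epochs=500):
            Xb = np.hstack([X, np.ones((len(X), 1))])      # add a bias column
            w = np.zeros(Xb.shape[1])
            for _ in range(epochs):
                r = y - Xb @ w                             # residuals
                w -= lr * (Xb.T @ pinball_grad(r, tau)) / len(y)
            return w

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(500, 1))
        y = 2 * X.ravel() + rng.normal(0, 0.2, 500)
        print(fit_quantile(X, y, tau=0.9))   # roughly slope 2, intercept near the 90th noise percentile

    Replacing the linear map with a neural network and minimising the same loss yields a network output that estimates the conditional quantile.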

  2. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
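
    The following NumPy sketch illustrates the kind of vector-based update the paper describes, simulating many leaky integrate-and-fire neurons per time step without a Python loop over cells; it is not code from the Brian simulator itself, and the parameters are arbitrary.

        import numpy as np

        def simulate_lif(n=1000, steps=1000, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0,
                         input_rate=1.2, noise=0.5, seed=0):
            rng = np.random.default_rng(seed)
            v = np.zeros(n)
            spike_counts = np.zeros(n, dtype=int)
            for _ in range(steps):
                drive = input_rate + noise * rng.standard_normal(n)
                v += dt * (-v + drive) / tau             # one Euler step for all neurons at once
                fired = v >= v_th                        # boolean vector of spiking neurons
                spike_counts += fired
                v[fired] = v_reset                       # vectorised reset, no per-cell loop
            return spike_counts

        print(simulate_lif()[:10])                        # spike counts of the first ten neurons

    The per-step cost is dominated by a handful of array operations, which is what lets an interpreted language approach compiled-code performance.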

  3. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  4. Neural Network and Letter Recognition.

    Science.gov (United States)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system is comprised of two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is considered. Through correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses data through the 'Gabor' transform. Pattern-dependent choice of the centers and wavelengths of the 'Gabor' filters is the cause of the shift and scale tolerance of the system. Three different learning schemes have been investigated in the recognition unit, namely: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and the perceptron is used as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected. Some of these are used as training sets and the remainder are used as test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the above three different learning algorithms. The minimum error

  5. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  6. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within...... statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given...

  7. Person Movement Prediction Using Neural Networks

    OpenAIRE

    Vintan, Lucian; Gellert, Arpad; Petzold, Jan; Ungerer, Theo

    2006-01-01

    Ubiquitous systems use context information to adapt appliance behavior to human needs. Even more convenience is reached if the appliance foresees the user's desires and acts proactively. This paper proposes neural prediction techniques to anticipate a person's next movement. We focus on neural predictors (multi-layer perceptron with back-propagation learning) with and without pre-training. The optimal configuration of the neural network is determined by evaluating movement sequences of real p...

  8. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Science.gov (United States)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  9. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  10. [Medical use of artificial neural networks].

    Science.gov (United States)

    Molnár, B; Papik, K; Schaefer, R; Dombóvári, Z; Fehér, J; Tulassay, Z

    1998-01-04

    The main aim of research in medical diagnostics is to develop more exact, cost-effective and convenient systems, procedures and methods for supporting clinicians. In their paper the authors introduce a method that has recently come into focus, referred to as artificial neural networks. Based on the literature of the past 5-6 years they give a brief review, highlighting the most important works, showing the idea behind neural networks and what they are used for in the medical field. The definition, structure and operation of neural networks are discussed. In the application part they collect examples in order to give an insight into neural network application research. It is emphasised that in the near future basically new diagnostic equipment can be developed based on this new technology in the field of ECG, EEG and macroscopic and microscopic image analysis systems.

  11. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed, feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller

  12. Blood glucose prediction using neural network

    Science.gov (United States)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each laser diode operating at distinct near infrared wavelength between 1500nm and 1800nm. The neural network is specifically used to determine blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level as compared with the partial least squares model.

  13. FOREX PREDICTION USING A NEURAL NETWORK MODEL

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision to predict is very important, because prediction helps to estimate the forex value at a given future time and can therefore reduce the risk of loss. The purpose of this research is to predict forex using a neural network model with one-minute time series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method comprises data collection followed by training, learning, and testing using a neural network. After evaluation, the results show that the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  14. Using Neural Networks in Diagnosing Breast Cancer

    National Research Council Canada - National Science Library

    Fogel, David

    1997-01-01

    .... In the current study, evolutionary programming is used to train neural networks and linear discriminant models to detect breast cancer in suspicious and microcalcifications using radiographic features and patient age...

  15. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  16. Isolated Speech Recognition Using Artificial Neural Networks

    National Research Council Canada - National Science Library

    Polur, Prasad

    2001-01-01

    .... A small size vocabulary containing the words YES and NO is chosen. Spectral features using cepstral analysis are extracted per frame and imported to a feedforward neural network which uses a backpropagation with momentum training algorithm...

  17. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  18. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglova

    2008-11-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the "free" space using ultrasound range finder data. The second neural network "finds" a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  19. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  20. Constructive autoassociative neural network for facial recognition.

    Directory of Open Access Journals (Sweden)

    Bruno J T Fernandes

    Full Text Available Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network. CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.

  1. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Genetic Algorithm Optimized Neural Networks Ensemble as. Calibration Model for Simultaneous Spectrophotometric. Estimation of Atenolol and Losartan Potassium in Tablets. Dondeti Satyanarayana*, Kamarajan Kannan and Rajappan Manavalan. Department of Pharmacy, Annamalai University, Annamalainagar, Tamil ...

  2. Projection learning algorithm for threshold - controlled neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Reznik, A.M.

    1995-03-01

    The projection learning algorithm proposed in [1, 2] and further developed in [3] substantially improves the efficiency of memorizing information and accelerates the learning process in neural networks. This algorithm is compatible with the completely connected neural network architecture (the Hopfield network [4]), but its application to other networks involves a number of difficulties. The main difficulties include constraints on interconnection structure and the need to eliminate the state uncertainty of latent neurons if such are present in the network. Despite the encouraging preliminary results of [3], further extension of the applications of the projection algorithm therefore remains problematic. In this paper, which is a continuation of the work begun in [3], we consider threshold-controlled neural networks. Networks of this type are quite common. They represent the receptor neuron layers in some neurocomputer designs. A similar structure is observed in the lower divisions of biological sensory systems [5]. In multilayer projection neural networks with lateral interconnections, the neuron layers or parts of these layers may also have the structure of a threshold-controlled completely connected network. Here the thresholds are the potentials delivered through the projection connections from other parts of the network. The extension of the projection algorithm to the class of threshold-controlled networks may accordingly prove to be useful both for extending its technical applications and for better understanding of the operation of the nervous system in living organisms.

  3. Neural networks as models of psychopathology.

    Science.gov (United States)

    Aakerlund, L; Hemmingsen, R

    1998-04-01

    Neural network modeling is situated between neurobiology, cognitive science, and neuropsychology. The structural and functional resemblance with biological computation has made artificial neural networks (ANN) useful for exploring the relationship between neurobiology and computational performance, i.e., cognition and behavior. This review provides an introduction to the theory of ANN and how they have linked theories from neurobiology and psychopathology in schizophrenia, affective disorders, and dementia.

  4. A neural network simulation package in CLIPS

    Science.gov (United States)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for integrated use of the two techniques and is also extendible to accommodate other AI techniques like semantic networks, connectionist networks, and even petri nets. This integrated technique can be very useful in solving complex AI problems.

  5. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
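
    The reported figures can be reproduced in form (not in value) from per-image scores and referral labels with the sketch below; the scores and labels here are synthetic placeholders, not the Otago or Messidor data.

        import numpy as np

        def sens_spec(scores, labels, threshold):
            pred = scores >= threshold
            tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
            tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
            return tp / (tp + fn), tn / (tn + fp)            # sensitivity, specificity

        def auc(scores, labels):
            # probability that a random referable case scores higher than a random non-referable one
            pos, neg = scores[labels == 1], scores[labels == 0]
            return np.mean(pos[:, None] > neg[None, :])

        rng = np.random.default_rng(0)
        labels = rng.integers(0, 2, 500)
        scores = np.clip(labels * 0.4 + rng.normal(0.4, 0.2, 500), 0, 1)
        print(auc(scores, labels), sens_spec(scores, labels, threshold=0.6))

    Sweeping the threshold traces out the receiver operating characteristic curve whose area is quoted in the abstract.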

  6. Symbolic processing in neural networks

    OpenAIRE

    Neto, João Pedro; Hava T Siegelmann; Costa,J.Félix

    2003-01-01

    In this paper we show that programming languages can be translated into recurrent (analog, rational weighted) neural nets. Implementation of programming languages in neural nets turns out to be not only theoretically exciting, but also has some practical implications in the recent efforts to merge symbolic and sub-symbolic computation. To be of some use, it should be carried out in a context of bounded resources. Herein, we show how to use resource bounds to speed up computations over neural nets, thro...

  7. Hindcasting cyclonic waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Chakravarty, N.V.

    the backpropagation networks with updated algorithms are used in this paper. A brief description of the working of a backpropagation neural network and three updated algorithms is given below. Backpropagation learning: Backpropagation is the most widely used... algorithm for supervised learning with multilayer feedforward networks. The idea of the backpropagation learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary...
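
    The chain-rule idea described above can be illustrated with a minimal one-hidden-layer network trained by plain backpropagation. This is a generic NumPy sketch, not the updated algorithms used in the paper; all sizes and learning rates are illustrative.

```python
import numpy as np

# One hidden layer, sigmoid activations, squared error; gradients follow the
# repeated chain-rule application described in the abstract.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: chain rule applied layer by layer
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return W1, b1, W2, b2

# toy usage: learn XOR; outputs should approach [0, 1, 1, 0]
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
W1, b1, W2, b2 = train(X, y)
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```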

  8. Neural Networks for Modeling and Control of Particle Accelerators

    Science.gov (United States)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  9. Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach

    Science.gov (United States)

    2016-03-30

    Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach. Keywords: ...monitoring, flight parameter, nonlinear modeling, artificial neural network, typical loadcase. Introduction: Aircraft load monitoring is an... Neural networks (ANN), i.e. the BP network and the Kohonen clustering network, are applied and revised by a Kalman filter and a genetic algorithm to build

  10. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.
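
    As a minimal illustration of the prototype-learning idea described above (one representative example per class, matched against new inputs), the NumPy sketch below implements a nearest-prototype classifier. It is not the proposed prototype-incorporated EmNN; the emotional weights and update rules are not reproduced here, and the data are synthetic.

```python
import numpy as np

class NearestPrototypeClassifier:
    """One prototype per class (here the class mean); new examples are
    matched to the closest prototype, as in prototype-learning theory."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.prototypes_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from every sample to every prototype, then pick the nearest
        d = np.linalg.norm(X[:, None, :] - self.prototypes_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# toy usage with two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = NearestPrototypeClassifier().fit(X, y)
print((clf.predict(X) == y).mean())   # training accuracy on the toy data
```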

  11. On sparsely connected optimal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V. [Los Alamos National Lab., NM (United States); Draghici, S. [Wayne State Univ., Detroit, MI (United States)

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT²) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds--for good estimates of the area; and (ii) the fan-ins and the length of the wires--for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon's decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.

  12. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error backpropagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  13. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered under a mild set of assumptions. A number of applications...

  14. Medical Text Classification using Convolutional Neural Networks

    OpenAIRE

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.

  15. Medical Text Classification Using Convolutional Neural Networks.

    Science.gov (United States)

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.
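
    A hedged sketch of a sentence-level convolutional classifier of the general kind described above (word embeddings, parallel convolutions over the sequence, max-pooling, a linear output layer) is given below in PyTorch. The vocabulary size, number of classes and filter settings are illustrative assumptions, not the configuration used by the authors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceCNN(nn.Module):
    """Convolutional classifier over word embeddings; a generic sketch,
    the actual architecture used in the record may differ."""
    def __init__(self, vocab_size, num_classes, emb_dim=100, n_filters=64,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))     # raw class scores

# toy forward pass on a batch of two padded sentences of length 20
model = SentenceCNN(vocab_size=5000, num_classes=10)   # both numbers are placeholders
logits = model(torch.randint(1, 5000, (2, 20)))
print(logits.shape)   # torch.Size([2, 10])
```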

  16. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  17. Visual Servoing from Deep Neural Networks

    OpenAIRE

    Bateux, Quentin; Marchand, Eric; Leitner, Jürgen; Chaumette, Francois; Corke, Peter

    2017-01-01

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing c...

  18. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present...... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  19. Electronic device aspects of neural network memories

    Science.gov (United States)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  20. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions and can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  1. Artificial neural network modeling of dissolved oxygen in reservoir.

    Science.gov (United States)

    Chen, Wei-Bo; Liu, Wen-Cheng

    2014-02-01

    The water quality of reservoirs is one of the key factors in the operation and water quality management of reservoirs. Dissolved oxygen (DO) in the water column is essential for microorganisms and a significant indicator of the state of aquatic ecosystems. In this study, two artificial neural network (ANN) models, the back propagation neural network (BPNN) and the adaptive neural-based fuzzy inference system (ANFIS), and a multilinear regression (MLR) model were developed to estimate the DO concentration in the Feitsui Reservoir of northern Taiwan. The input variables of the neural network are determined as water temperature, pH, conductivity, turbidity, suspended solids, total hardness, total alkalinity, and ammonium nitrogen. The performance of the ANN models and the MLR model was assessed through the mean absolute error, root mean square error, and correlation coefficient computed from the measured and model-simulated DO values. The results reveal that ANN estimation performances were superior to those of MLR. Comparing the BPNN and ANFIS models using these performance criteria, the ANFIS model is better than the BPNN model for predicting the DO values. Study results show that the neural network, particularly the ANFIS model, is able to predict the DO concentrations with reasonable accuracy, suggesting that the neural network is a valuable tool for reservoir management in Taiwan.
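
    The following sketch shows the general workflow described above (train a neural network on the eight water-quality inputs, then score it with MAE, RMSE and the correlation coefficient) using scikit-learn's MLPRegressor on synthetic stand-in data. It is not the BPNN/ANFIS setup of the study, and the data are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in for the eight water-quality inputs and measured DO:
# temperature, pH, conductivity, turbidity, suspended solids, hardness, alkalinity, NH4-N
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
do = 8 - 0.4 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.3, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, do, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

mae = mean_absolute_error(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
r = np.corrcoef(y_te, pred)[0, 1]
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  r={r:.2f}")
```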

  2. Antagonistic neural networks underlying differentiated leadership roles

    OpenAIRE

    Richard Eleftherios Boyatzis; Kylie eRochford; Anthony Ian Jack

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950’s. Recent research in neuroscience suggests that the division between task oriented and socio-emotional oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN) and the Default Mode Network (DMN). Neural activity in ...

  3. Representations in neural network based empirical potentials

    Science.gov (United States)

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios

    2017-07-01

    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.

  4. Neural networks to formulate special fats

    Directory of Open Access Journals (Sweden)

    Garcia, R. K.

    2012-09-01

    Full Text Available Neural networks are a branch of artificial intelligence based on the structure and development of biological systems, having as its main characteristic the ability to learn and generalize knowledge. They are used for solving complex problems for which traditional computing systems have a low efficiency. To date, applications have been proposed for different sectors and activities. In the area of fats and oils, the use of neural networks has focused mainly on two issues: the detection of adulteration and the development of fatty products. The formulation of fats for specific uses is the classic case of a complex problem where an expert or group of experts defines the proportions of each base, which, when mixed, provide the specifications for the desired product. Some conventional computer systems are currently available to assist the experts; however, these systems have some shortcomings. This article describes in detail a system for formulating fatty products, shortenings or special fats, from three or more components by using neural networks (MIX. All stages of development, including design, construction, training, evaluation, and operation of the network will be outlined.

    Neural networks are a branch of artificial intelligence based on the structure and functioning of biological systems, whose main characteristic is the ability to learn and generalize knowledge. They are used to solve complex problems for which traditional computational systems show low efficiency. To date, applications have been proposed for the most diverse sectors and activities. In the area of fats and oils, the use of neural networks has concentrated mainly on two subjects: the detection of adulteration and the formulation of fatty products. The formulation of fats for specific use is the classic case of a complex problem where an expert or group of

  5. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Full Text Available Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in the ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these, and other, changes has on the network behavior is hard to know from experimental studies since they all happen in parallel. One advantage of a computational approach is that one has the ability to study developmental changes in isolation. Here, we examine the effects of GABAergic synapse polarity change on the spontaneous activity of both a mean field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the “intermediate neurons.” We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral change results from priming of these neurons. The study illustrates how even arguably the simplest of developmental changes that occur in neural systems can present non-intuitive behaviors. It also makes predictions about neural network behavioral changes

  6. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics which are grouped together into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and  bio-inspired memristor-based networks.  Providing insights into the latest research interest from a pool of international experts coming from different research fields, the volume becomes valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  7. Community structure of complex networks based on continuous neural network

    Science.gov (United States)

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou

    2017-09-01

    As a new subject, research on complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so it is a very important task to analyze the community structure of complex networks accurately. In this paper, we study the problem of extracting the community structure of complex networks, and propose a continuous neural network (CNN) algorithm. It is proved that for any given initial value, the continuous neural network algorithm converges to the eigenvector of the maximum eigenvalue of the network modularity matrix. Therefore, according to the signs of the stable state reached by the network evolution, a division into two communities can be obtained.
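
    Since the algorithm is proved to converge to the leading eigenvector of the modularity matrix, the two-community split can be illustrated by computing that eigenvector directly (here by a shifted power iteration) and reading off its signs. This NumPy sketch illustrates the underlying linear algebra, not the continuous neural network dynamics themselves.

```python
import numpy as np

def two_community_split(A, iters=2000):
    """Split a graph (adjacency matrix A) into two communities using the
    sign of the leading eigenvector of the modularity matrix."""
    k = A.sum(axis=1)
    m = k.sum() / 2.0
    B = A - np.outer(k, k) / (2.0 * m)            # modularity matrix
    # shift so all eigenvalues are non-negative, then power-iterate
    shift = np.abs(B).sum(axis=1).max()           # upper bound on the spectral radius
    M = B + shift * np.eye(len(A))
    v = np.random.default_rng(0).standard_normal(len(A))
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return np.where(v >= 0, 0, 1)

# toy usage: two 4-node cliques joined by a single edge
A = np.zeros((8, 8))
A[:4, :4] = 1; A[4:, 4:] = 1; np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1
print(two_community_split(A))   # e.g. [0 0 0 0 1 1 1 1] (labels may be swapped)
```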

  8. Flexible body control using neural networks

    Science.gov (United States)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction (CSI) suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  9. Identification and Position Control of Marine Helm using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hui ZHU

    2008-02-01

    Full Text Available If nonlinearities such as saturation of the amplifier gain and motor torque, gear backlash, and shaft compliances, just to name a few, are considered in the position control system of a marine helm, traditional control methods are no longer sufficient to improve the performance of the system. In this paper an alternative approach to traditional control methods, a neural network reference controller, is proposed to establish adaptive control of the position of the marine helm so that the controlled variable reaches the command position. This neural network controller comprises two neural networks. One is the plant model network used to identify the nonlinear system, and the other is the controller network used to control the output to follow the reference model. The experimental results demonstrate that this adaptive neural network reference controller has much better control performance than is obtained with traditional controllers.

  10. Analysis of the experimental positron lifetime spectra by neural networks

    Directory of Open Access Journals (Sweden)

    Avdić Senada

    2003-01-01

    Full Text Available This paper deals with the analysis of experimental positron lifetime spectra in polymer materials by using various algorithms of neural networks. A method based on the use of artificial neural networks for unfolding the mean lifetime and intensity of the spectral components of simulated positron lifetime spectra was previously suggested and tested on simulated data [Pázsit et al., Applied Surface Science, 149 (1998), 97]. In this work, the applicability of the method to the analysis of experimental positron spectra has been verified in the case of spectra from polymer materials with three components. It has been demonstrated that the backpropagation neural network can determine the spectral parameters with a high accuracy and perform the decomposition of lifetimes which differ by 10% or more. The backpropagation network has not been suitable for the identification of both the parameters and the number of spectral components. Therefore, a separate artificial neural network module has been designed to solve the classification problem. Module types based on self-organizing map and learning vector quantization algorithms have been tested. The learning vector quantization algorithm was found to have better performance and reliability. A complete artificial neural network analysis tool for positron lifetime spectra has been constructed to include a spectra classification module and parameter evaluation modules for spectra with a different number of components. In this way, both flexibility and high resolution can be achieved.

  11. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  12. Memory-optimal neural network approximation

    Science.gov (United States)

    Bölcskei, Helmut; Grohs, Philipp; Kutyniok, Gitta; Petersen, Philipp

    2017-08-01

    We summarize the main results of a recent theory, developed by the authors, establishing fundamental lower bounds on the connectivity and memory requirements of deep neural networks as a function of the complexity of the function class to be approximated by the network. These bounds are shown to be achievable. Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, shearlets, ridgelets, α-shearlets, and more generally α-molecules. This result elucidates a remarkable universality property of deep neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks which provide close-to-optimal approximation rates at minimal connectivity. Moreover, stochastic gradient descent is found to actually learn approximations that are sparse in the representation system optimally sparsifying the function class the network is trained on.

  13. Equivalence of Conventional and Modified Network of Generalized Neural Elements

    Directory of Open Access Journals (Sweden)

    E. V. Konovalov

    2016-01-01

    Full Text Available The article is devoted to the analysis of neural networks consisting of generalized neural elements. The first part of the article proposes a new neural network model, a modified network of generalized neural elements (MGNE-network). This network develops the model of the generalized neural element, whose formal description contains some flaws; in the model of the MGNE-network these drawbacks are overcome. The neural network is introduced all at once, without a preliminary description of the model of a single neural element and of the method of interaction of such elements. The description of the neural network mathematical model is simplified, which makes it relatively easy to construct a simulation model on its basis and to conduct numerical experiments. The model of the MGNE-network is universal, uniting properties of networks consisting of neuron-oscillators and neuron-detectors. In the second part of the article we prove the equivalence of the dynamics of the two considered neural networks: the network consisting of classical generalized neural elements, and the MGNE-network. We introduce the definition of equivalence in the functioning of the generalized neural element and the MGNE-network consisting of a single element. Then we introduce the definition of the equivalence of the dynamics of the two neural networks in general. The correlation of the different parameters of the two considered neural network models is determined, and the issue of matching their initial conditions is discussed. We prove a theorem on the equivalence of the dynamics of the two considered neural networks, which allows all previously obtained results for networks consisting of classical generalized neural elements to be applied to the MGNE-network.

  14. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics
    2. Feed-forward networks, Perceptrons, Function approximators
    3. Self-organisation, Feature Maps
    4. Feed-back Networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection

  15. Thrips (Thysanoptera) identification using artificial neural networks.

    Science.gov (United States)

    Fedor, P; Malenovský, I; Vanhara, J; Sierka, W; Havel, J

    2008-10-01

    We studied the use of a supervised artificial neural network (ANN) model for semi-automated identification of 18 common European species of Thysanoptera from four genera: Aeolothrips Haliday (Aeolothripidae), Chirothrips Haliday, Dendrothrips Uzel, and Limothrips Haliday (all Thripidae). As input data, we entered 17 continuous morphometric and two qualitative two-state characters measured or determined on different parts of the thrips body (head, pronotum, forewing and ovipositor) and the sex. Our experimental data set included 498 thrips specimens. A relatively simple ANN architecture (multilayer perceptrons with a single hidden layer) enabled a 97% correct simultaneous identification of both males and females of all the 18 species in an independent test. This high reliability of classification is promising for a wider application of ANN in the practice of Thysanoptera identification.
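
    A minimal sketch of the kind of classifier described above (a multilayer perceptron with a single hidden layer over the 20 morphometric/qualitative inputs) is shown below with scikit-learn. The feature matrix and species labels are random placeholders for the 498 measured specimens, so the printed cross-validation score is only near chance here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder for the 17 continuous characters, 2 qualitative characters and sex,
# and 18 species labels (the real measurements are in the cited study).
rng = np.random.default_rng(0)
X = rng.normal(size=(498, 20))
y = rng.integers(0, 18, size=498)

# Multilayer perceptron with a single hidden layer, as in the record.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0))
print(cross_val_score(clf, X, y, cv=5).mean())
```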

  16. Neural networks in support of manned space

    Science.gov (United States)

    Werbos, Paul J.

    1989-01-01

    Many lobbyists in Washington have argued that artificial intelligence (AI) is an alternative to manned space activity. In actuality, this is the opposite of the truth, especially as regards artificial neural networks (ANNs), that form of AI which has the greatest hope of mimicking human abilities in learning, ability to interface with sensors and actuators, flexibility and balanced judgement. ANNs and their relation to expert systems (the more traditional form of AI), and the limitations of both technologies, are briefly reviewed. A few highlights of recent work on ANNs, including an NSF-sponsored workshop on ANNs for control applications, are given. Current thinking on ANNs for use in certain key areas (the National Aerospace Plane, teleoperation, the control of large structures, fault diagnostics, and docking) which may be crucial to the long term future of man in space is discussed.

  17. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. In contrast to these analyses, computational intelligence approaches have been little explored in the genetic improvement of cotton. This study therefore presents the use of artificial neural networks as auxiliary tools in cotton breeding aimed at improving fiber quality. To demonstrate the applicability of this approach, the research used evaluation data from 40 genotypes. In order to classify the genotypes by fiber quality, the artificial neural networks were trained with replicate data from 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, covering fiber length, length uniformity, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and a fiber quality index. This quality index was estimated as a weighted average of the score (1 to 5) assigned to each HVI characteristic according to industry standards. The artificial neural networks classified the 20 selected genotypes with high accuracy based on the fiber quality index; using fiber length together with the short fiber index, fiber maturity, and micronaire index gave better results than using fiber length alone or the previous associations. It was also observed that submitting the means of new genotypes to networks trained on replicate data provides better classification results. Overall, the results indicate that artificial neural networks have great potential for use at the different stages of a cotton breeding program aimed at improving the fiber quality of future cultivars.

  18. Piecewise convexity of artificial neural networks.

    Science.gov (United States)

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Artificial neural network in cosmic landscape

    Science.gov (United States)

    Liu, Junyu

    2017-12-01

    In this paper we propose that artificial neural networks, the basis of machine learning, are useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically need an exponential complexity when the number of fields is large. However, a basic application of artificial neural networks could solve the problem based on the universal approximation theorem of the multilayer perceptron. A toy model in inflation with multiple light fields is investigated numerically as an example of such an application.

  20. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  1. Pulse image recognition using fuzzy neural network.

    Science.gov (United States)

    Xu, L S; Meng, Max Q -H; Wang, K Q

    2007-01-01

    The automatic recognition of pulse images is the key in research on computerized pulse diagnosis. In order to automatically differentiate pulse patterns using small samples, a fuzzy neural network to classify pulse images, based on the knowledge of experts in traditional Chinese pulse diagnosis, was designed. The designed classifier can make hard and soft decisions for identifying 18 patterns of pulse images with an accuracy of 91%, which is better than the results achieved by a back-propagation neural network.

  2. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    failure" which is main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceed the effective shear strength. These slides have been referred to as landslide. An expert system based on artificial...... neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The Artificial neural network model of this research uses slope characteristics as input and leads to the output in form of the probability of failure...

  3. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  4. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  5. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.

  6. Neural Network Analysis and Evaluation of the Fetal Heart Rate

    Directory of Open Access Journals (Sweden)

    Yasuaki Noguchi

    2009-01-01

    Full Text Available The aim of the present study is to obtain a highly objective automatic fetal heart rate (FHR) diagnosis. The neural network software was composed of three layers with back propagation, to which 8 FHR data, including sinusoidal FHR, were input, and the system was trained on the data of 20 cases with a known outcome. The output was the probability of a normal, intermediate, or pathologic outcome. The neural index was studied in prolonged monitoring. The neonatal states and the FHR score strongly correlated with the outcome probability. The neural index diagnosis was correct. The completed software was transferred to other computers, where the system function was correct.

  7. Exploiting network redundancy for low-cost neural network realizations.

    NARCIS (Netherlands)

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT

    1996-01-01

    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  8. Removing Epistemological Bias From Empirical Observation of Neural Networks

    OpenAIRE

    Waldron, Ronan

    1994-01-01

    Also in Proceedings of the International Joint Conference on Neural Networks, Nagoya, Japan. This paper addresses the application of neural network research to a theory of autonomous systems. Neural networks, while enjoying considerable success in autonomous systems applications, have failed to provide a firm theoretical underpinning to neural systems embedded in their natural ecological context. This paper proposes a stochastic formulation of such an embedding. A neural sys...

  9. Steam turbine stress control using NARX neural network

    Science.gov (United States)

    Dominiczak, Krzysztof; Rzadkowski, Romuald; Radulski, Wojciech

    2015-11-01

    Considered here is a concept of steam turbine stress control based on Nonlinear AutoRegressive neural networks with eXogenous inputs (NARX). Using NARX neural networks, which were trained on an experimentally validated FE model, allows stresses in a protected thick-walled steam turbine element to be controlled with FE-model quality. Additionally, a NARX neural network trained on the FE model includes: the nonlinearity of steam expansion in the turbine steam path during transients, the nonlinearity of heat exchange inside the turbine during transients, and the nonlinearity of material properties during transients. In this article NARX neural network stress control is shown on the example of the HP rotor of an 18K390 turbine. The HP part thermodynamic model, as well as the heat exchange model in the vicinity of the HP rotor, which were used in the FE model of the HP rotor, and the HP rotor FE model itself were validated based on experimental data for real turbine transient events. In this way it is ensured that the NARX neural network behaves like the real HP rotor during steam turbine transient events.
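
    To illustrate the NARX idea (predicting the next output from lagged outputs and lagged exogenous inputs with a neural network), the sketch below fits an MLP to a synthetic lagged nonlinear system using scikit-learn. It is only a one-step-ahead prediction example and does not reproduce the FE-model training or the stress-control loop; all signals and lag orders are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_dataset(u, y, nu=3, ny=3):
    """Build regression pairs for a NARX model:
    y[t] is predicted from the last ny outputs and the last nu inputs."""
    lag = max(nu, ny)
    X, T = [], []
    for t in range(lag, len(y)):
        X.append(np.concatenate([y[t - ny:t], u[t - nu:t]]))
        T.append(y[t])
    return np.array(X), np.array(T)

# synthetic exogenous input and a lagged nonlinear response (placeholders only)
rng = np.random.default_rng(0)
u = np.sin(np.linspace(0, 20, 600)) + 0.1 * rng.standard_normal(600)
y = np.zeros_like(u)
for t in range(2, len(u)):
    y[t] = 0.6 * y[t - 1] - 0.1 * y[t - 2] + 0.5 * np.tanh(u[t - 1])

X, T = make_narx_dataset(u, y)
narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X[:400], T[:400])
rmse = np.sqrt(np.mean((narx.predict(X[400:]) - T[400:]) ** 2))
print(f"one-step-ahead test RMSE: {rmse:.4f}")
```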

  10. Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2

    Science.gov (United States)

    Lea, Robert N. (Editor); Villarreal, James A. (Editor)

    1991-01-01

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.

  11. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated...

  12. On The Comparison of Artificial Neural Network (ANN) and ...

    African Journals Online (AJOL)

    West African Journal of Industrial and Academic Research ... This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for ... Keywords: Multinomial Logistic Regression, Artificial Neural Network, Correct classification rate.

  13. A NEURAL OSCILLATOR-NETWORK MODEL OF TEMPORAL PATTERN GENERATION

    NARCIS (Netherlands)

    Schomaker, Lambert

    Most contemporary neural network models deal with essentially static, perceptual problems of classification and transformation. Models such as multi-layer feedforward perceptrons generally do not incorporate time as an essential dimension, whereas biological neural networks are inherently temporal

  14. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.

  15. The extraction of information and knowledge from trained neural networks.

    Science.gov (United States)

    Livingstone, David J; Browne, Antony; Crichton, Raymond; Hudson, Brian D; Whitley, David C; Ford, Martyn G

    2008-01-01

    In the past, neural networks were viewed as classification and regression systems whose internal representations were incomprehensible. It is now becoming apparent that algorithms can be designed that extract comprehensible representations from trained neural networks, enabling them to be used for data mining and knowledge discovery, that is, the discovery and explanation of previously unknown relationships present in data. This chapter reviews existing algorithms for extracting comprehensible representations from neural networks and outlines research to generalize and extend the capabilities of one of these algorithms, TREPAN. This algorithm has been generalized for application to bioinformatics data sets, including the prediction of splice junctions in human DNA sequences, and cheminformatics. The results generated on these data sets are compared with those generated by a conventional data mining technique (C5) and appropriate conclusions are drawn.

  16. Neural network design for J function approximation in dynamic programming

    CERN Document Server

    Pang, X

    1998-01-01

    This paper shows that a new type of artificial neural network (ANN) -- the Simultaneous Recurrent Network (SRN) -- can, if properly trained, solve a difficult function approximation problem which conventional ANNs -- either feedforward or Hebbian -- cannot. This problem, the problem of generalized maze navigation, is typical of problems which arise in building true intelligent control systems using neural networks. (Such systems are discussed in the chapter by Werbos in K.Pribram, Brain and Values, Erlbaum 1998.) The paper provides a general review of other types of recurrent networks and alternative training techniques, including a flowchart of the Error Critic training design, arguably the only plausible approach to explain how the brain adapts time-lagged recurrent systems in real-time. The C code of the test is appended. As in the first tests of backprop, the training here was slow, but there are ways to do better after more experience using this type of network.

  17. Implementation of neural network based non-linear predictive

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum phase systems, but has also been proposed in extended form for the control of non-linear systems. GPC is model-based, and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi-Newton optimization algorithm. The performance is demonstrated on a pneumatic servo system...

  18. Hand gesture recognition based on convolutional neural networks

    Science.gov (United States)

    Hu, Yu-lu; Wang, Lian-ming

    2017-11-01

    Hand gestures have been considered a natural, intuitive and less intrusive way for Human-Computer Interaction (HCI). Although many algorithms for hand gesture recognition have been proposed in the literature, robust algorithms are still being pursued. A recognition algorithm based on convolutional neural networks is proposed to recognize ten kinds of hand gestures, including rotation and turnover samples acquired from different persons. When 6000 hand gesture images were used as training samples and 1100 as testing samples, a 98% recognition rate was achieved with the convolutional neural networks, which is higher than that of some other frequently-used recognition algorithms.
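
    A small convolutional classifier of the general type described above (stacked convolution/pooling layers followed by fully connected layers and ten output classes) is sketched below in PyTorch. The input size and layer sizes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Illustrative convolutional classifier for 10 gesture classes on
    64x64 grayscale images; a sketch, not the network used in the record."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))  # 16 -> 8
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

# toy forward pass on a batch of four random images
model = GestureCNN()
print(model(torch.randn(4, 1, 64, 64)).shape)   # torch.Size([4, 10])
```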

  19. Neural network for sonogram gap filling

    DEFF Research Database (Denmark)

    Klebæk, Henrik; Jensen, Jørgen Arendt; Hansen, Lars Kai

    1995-01-01

    a neural network for predicting mean frequency of the velocity signal and its variance. The neural network then predicts the evolution of the mean and variance in the gaps, and the sonogram and audio signal are reconstructed from these. The technique is applied on in-vivo data from the carotid artery...... in the sonogram and in the audio signal, rendering the audio signal useless, thus making diagnosis difficult. The current goal for ultrasound scanners is to maintain a high refresh rate for the B-mode image and at the same time attain a high maximum velocity in the sonogram display. This precludes the intermixing...... series, and is shown to yield better results, i.e., the variances of the predictions are lower. The ability of the neural predictor to reconstruct both the sonogram and the audio signal, when only 50% of the time is used for velocity data acquisition, is demonstrated for the in-vivo data...

  20. Digital Neural Networks for New Media

    Science.gov (United States)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  1. Dynamic Object Identification with SOM-based neural networks

    Directory of Open Access Journals (Sweden)

    Aleksey Averkin

    2014-03-01

    Full Text Available In this article a number of neural networks based on self-organizing maps that can be successfully used for dynamic object identification are described. Unique SOM-based modular neural networks with vector quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms for learning and operation of such SOM-based neural networks are described in detail; some experimental results and a comparison with other neural networks are also given.

  2. Stock Price Prediction Based on Procedural Neural Networks

    OpenAIRE

    Jiuzhen Liang; Wei Song; Mei Wang

    2011-01-01

    We present a spatiotemporal model, namely, procedural neural networks for stock price prediction. Compared with some successful traditional models for simulating the stock market, such as BNN (backpropagation neural networks), HMM (hidden Markov model) and SVM (support vector machine), the procedural neural network model processes both spatial and temporal information synchronously without a sliding time window, which is typically used in the well-known recurrent neural networks. Two differen...

  3. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher ...

  4. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, so stock market analysis and prediction research has received much attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…

  5. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  6. Artificial neural networks and support vector machines

    Indian Academy of Sciences (India)

    Quantitative structure-property relationships of electroluminescent materials: Artificial neural networks and support vector machines to predict electroluminescence of organic molecules. ALANA FERNANDES GOLIN and RICARDO STEFANI. ∗. Laboratório de Estudos de Materiais (LEMAT), Instituto de Ciências Exatas e da ...

  7. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  8. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage...

  9. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  10. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  11. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  12. Separable explanations of neural network decisions

    DEFF Research Database (Denmark)

    Rieger, Laura

    2017-01-01

    Deep Taylor Decomposition is a method used to explain neural network decisions. When applying this method to non-dominant classifications, the resulting explanation does not reflect important features for the chosen classification. We propose that this is caused by the dense layers and propose...

  13. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2017-01-01

    . In this work we evaluate the performance of two pre-trained convolutional neural networks fine-tuned on the NIST SD4 benchmark database. The obtained results show that this approach is comparable with other results in the literature, with the advantage of a fast feature extraction stage....

  14. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model...

  15. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS), which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
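    To make the input delay idea concrete, the sketch below applies a tiny feedforward network to a sliding window (tapped delay line) of tri-axial accelerometer samples. It is only an illustration: the window length, hidden-layer size, weights and the name digging_score are assumptions, not the classifier trained in this study.

        # Illustrative input-delay network: a small MLP applied to a sliding
        # window of accelerometer samples. All sizes and weights are assumed.
        import numpy as np

        rng = np.random.default_rng(0)
        window, hidden = 16, 4                       # 16 samples x 3 axes -> 4 hidden units
        W1 = rng.normal(size=(hidden, window * 3))
        b1 = np.zeros(hidden)
        w2, b2 = rng.normal(size=hidden), 0.0

        def digging_score(segment):
            """segment: (window, 3) accelerometer block; returns a score in (0, 1)."""
            h = np.tanh(W1 @ segment.ravel() + b1)   # hidden layer over the delay line
            return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))

        signal = rng.normal(size=(1000, 3))          # stand-in for recorded data
        scores = [digging_score(signal[t:t + window])
                  for t in range(0, len(signal) - window, window)]
        # An output filter (e.g. smoothing or majority voting over consecutive
        # segments) would then turn these per-segment scores into a decision.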

  16. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.

  17. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the step of face detection, we propose a hybrid model combining AdaBoost and Artificial Neural Network (ABANN to solve the process efficiently. In the next step, labeled faces detected by ABANN will be aligned by Active Shape Model and Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: geometric feature based method and Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on CallTech database show the feasibility of our proposed model.

  18. drinking water treatment using artificial neural network

    African Journals Online (AJOL)

    ogwueleka

    synaptic weights are used to store the knowledge.” The neural network approach is a branch of artificial intelligence. The ANN is based on a model of the human neurological system that consists of basic computing elements (called neurons) interconnected together (Figure 1). The model used for all classification attempts.

  19. Learning chaotic attractors by neural networks

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Giles, CL; Takens, F; van den Bleek, CM

    2000-01-01

    An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to short-term predict the time series. At the same time a criterion, developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored

  20. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to perform effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history, and to adapt our trading system's behaviour based on them.

  1. Neural networks, penalty logic and optimality theory

    NARCIS (Netherlands)

    Blutner, R.; Benz, A.; Blutner, R.

    2009-01-01

    Ever since the discovery of neural networks, there has been a controversy between two modes of information processing. On the one hand, symbolic systems have proven indispensable for our understanding of higher intelligence, especially when cognitive domains like language and reasoning are examined.

  2. Image inpainting using a neural network

    Directory of Open Access Journals (Sweden)

    Gapon Nikolay

    2017-01-01

    Full Text Available The paper describes a new method of two-dimensional signal reconstruction for restoring static images. A new method of spatial reconstruction of static images based on a geometric model using a neural network is proposed; it is based on searching for similar blocks and copying them into the region of distorted or missing pixel values.

  3. MBVCNN: Joint convolutional neural networks method for image recognition

    Science.gov (United States)

    Tong, Tong; Mu, Xiaodong; Zhang, Li; Yi, Zhaoxiang; Hu, Pei

    2017-05-01

    Aiming at the problem that objects to be recognized in images are rectangular while the inputs to convolutional neural networks are square, an object recognition model is put forward that is based on the BING method for object estimation and uses vectorization of convolutional neural networks to realize square image input to the convolutional networks, thereby building joint convolutional neural networks that achieve multiple-size image input. Experiments verified that the accuracy of multi-object image recognition was improved by 6.70% compared with a single vectorized convolutional neural network. Therefore, the joint convolutional neural network method can enhance the accuracy of image recognition, especially for targets of rectangular shape.

  4. The role of symmetry in neural networks and their Laplacian spectra.

    Science.gov (United States)

    de Lange, Siemon C; van den Heuvel, Martijn P; de Reus, Marcel A

    2016-11-01

    Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce systems-level network fingerprints. In this article, we examine a characteristic central peak in the spectrum of neural networks, including anatomical brain network maps of the mouse, cat and macaque, as well as anatomical and functional network maps of human brain connectivity. We link the occurrence of this central peak to the level of symmetry in neural networks, an intriguing aspect of network organization resulting from network elements that exhibit similar wiring patterns. Specifically, we propose a measure to capture the global level of symmetry of a network and show that, for both empirical networks and network models, the height of the main peak in the Laplacian spectrum is strongly related to node symmetry in the underlying network. Moreover, examination of spectra of duplication-based model networks shows that neural spectra are best approximated using a trade-off between duplication and diversification. Taken together, our results facilitate a better understanding of neural network spectra and the importance of symmetry in neural networks. Copyright © 2016 Elsevier Inc. All rights reserved.
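    For readers unfamiliar with the Laplacian spectrum, the Python sketch below shows one standard way to compute it from an adjacency matrix. The random graph is only a stand-in for a connectome, and the choice of the combinatorial (unnormalized) Laplacian is an assumption, since the record does not state which variant was analyzed.

        # Sketch: eigenvalue spectrum of a graph Laplacian. The random graph is
        # a placeholder for a brain network map; the Laplacian variant is assumed.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        A = (rng.random((n, n)) < 0.05).astype(float)
        A = np.triu(A, 1); A = A + A.T               # symmetric, no self-loops

        L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian D - A
        eigvals = np.linalg.eigvalsh(L)              # real spectrum (L is symmetric)

        # A histogram of the eigenvalues exposes the kind of central peak the
        # record discusses; nodes with identical wiring contribute repeated
        # eigenvalues that pile up in that peak.
        counts, edges = np.histogram(eigvals, bins=50)
        print("largest bin count:", counts.max())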

  5. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  6. Extracting knowledge from supervised neural networks in image processing

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert; Jain, R.; Abraham, A.; Faucher, C.; van der Zwaag, B.J.

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a “magic tool” but possibly even more as a

  7. neural network based load frequency control for restructuring power

    African Journals Online (AJOL)

    2012-03-01

    Mar 1, 2012 ... Abstract. In this study, an artificial neural network (ANN) application of load frequency control (LFC) of a multi-area power system by using a neural network controller is presented. The comparison between a conventional Proportional Integral (PI) controller and the proposed artificial neural networks ...

  8. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    The application of neural networks to model a laboratory scale inverse fluidized bed reactor has been studied. A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological ...

  9. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    Simple recurrent neural networks are widely used in time series prediction. Most researchers and application developers often choose arbitrarily between Elman or Jordan simple recurrent neural networks for their applications. A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used.

  10. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    user

    An original application of a radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of the RBF neural network for state estimation is investigated by testing its applicability on an IEEE 14 bus ...

  11. New Neural Network Methods for Forecasting Regional Employment

    NARCIS (Netherlands)

    Patuelli, R.; Reggiani, A; Nijkamp, P.; Blien, U.

    2006-01-01

    In this paper, a set of neural network (NN) models is developed to compute short-term forecasts of regional employment patterns in Germany. Neural networks are modern statistical tools based on learning algorithms that are able to process large amounts of data. Neural networks are enjoying

  12. The Artificial Neural Network as a Means for Modeling Nonlinear Systems

    OpenAIRE

    Drábek Oldøich; Taufer Ivan

    1998-01-01

    The paper deals with nonlinear system identification based on a neural network. The topic of this publication is the simulation of training and testing a neural network. The contribution is aimed at technologists who are well versed in classical identification problems but whose knowledge of neural-network-based identification is still only at the stage of theoretical foundations.

  13. The Artificial Neural Network as a Means for Modeling Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Drábek Oldøich

    1998-12-01

    Full Text Available The paper deals with nonlinear system identification based on a neural network. The topic of this publication is the simulation of training and testing a neural network. The contribution is aimed at technologists who are well versed in classical identification problems but whose knowledge of neural-network-based identification is still only at the stage of theoretical foundations.

  14. Algorithm For A Self-Growing Neural Network

    Science.gov (United States)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  15. [Artificial neural networks for decision making in urologic oncology].

    Science.gov (United States)

    Remzi, M; Djavan, B

    2007-06-01

    This chapter presents a detailed introduction to Artificial Neural Networks (ANNs) and their contribution to modern Urologic Oncology. It includes a description of ANN methodology and points out the differences between Artificial Intelligence and traditional statistical models in terms of usefulness for patients and clinicians, and its advantages over current statistical analysis.

  16. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  17. Neural network based system for script identification in Indian ...

    Indian Academy of Sciences (India)

    environments. The system developed includes a feature extractor and a modular neural network. The feature extractor consists of two stages. In the first stage ... environments is script/language identification (Muthusamy et al 1994; Hochberg et al 1997). ... In order to take advantage of the learning and generalization abilities ...

  18. Identifying Jets Using Artificial Neural Networks

    Science.gov (United States)

    Rosand, Benjamin; Caines, Helen; Checa, Sofia

    2017-09-01

    We investigate particle jet interactions with the Quark Gluon Plasma (QGP) using artificial neural networks modeled on those used in computer image recognition. We create jet images by binning jet particles into pixels and preprocessing every image. We analyzed the jets with a Multi-layered maxout network and a convolutional network. We demonstrate each network's effectiveness in differentiating simulated quenched jets from unquenched jets, and we investigate the method that the network uses to discriminate among different quenched jet simulations. Finally, we develop a greater understanding of the physics behind quenched jets by investigating what the network learnt as well as its effectiveness in differentiating samples. Yale College Freshman Summer Research Fellowship in the Sciences and Engineering.
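    The jet images mentioned above can be pictured as two-dimensional histograms of the jet constituents. The sketch below uses numpy's histogram2d with assumed coordinates (eta, phi), pT weights and grid size; it is not the analysis code from this work.

        # Sketch of jet-image construction: bin jet constituents into pixels,
        # weighting by transverse momentum. Grid size and ranges are assumed.
        import numpy as np

        rng = np.random.default_rng(2)
        eta = rng.normal(0.0, 0.3, size=100)         # constituent pseudorapidities
        phi = rng.normal(0.0, 0.3, size=100)         # constituent azimuthal angles
        pt = rng.exponential(1.0, size=100)          # constituent momenta

        image, _, _ = np.histogram2d(eta, phi, bins=32,
                                     range=[[-0.8, 0.8], [-0.8, 0.8]], weights=pt)

        image /= image.sum()                         # typical preprocessing step
        # The resulting 32x32 array is what a convolutional or maxout network
        # would receive as input.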

  19. Artificial neural networks as quantum associative memory

    Science.gov (United States)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states, and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, and also consider neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple-clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware. Supported by the United States Department of Defense; this work used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC0500OR22725 with the U. S. Department of Energy.

  20. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  1. Matrix representation of a Neural Network

    DEFF Research Database (Denmark)

    Christensen, Bjørn Klint

    Processing, by David Rummelhart (Rummelhart 1986) for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear...... network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes all indexed from 0. Bias-node for the hidden...
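    As a companion to the record above, here is a minimal Python sketch of a three-layer network written in matrix form, with one backpropagation step. The sigmoid units and squared-error loss are assumptions consistent with a textbook treatment, and bias nodes are omitted for brevity.

        # Minimal three-layer network in matrix form with one backpropagation
        # step (sigmoid units and squared-error loss assumed; biases omitted).
        import numpy as np

        rng = np.random.default_rng(3)
        I, J, K = 4, 5, 3                            # input, hidden, output sizes
        W1, W2 = rng.normal(size=(J, I)), rng.normal(size=(K, J))

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        x, t = rng.normal(size=I), rng.normal(size=K)   # one training pair
        h = sigmoid(W1 @ x)                          # hidden activations
        y = sigmoid(W2 @ h)                          # network output

        # Backpropagate the squared error 0.5 * ||y - t||^2
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        lr = 0.1
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x)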

  2. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents mathematical proof about the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
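    The period-estimation step can be illustrated by gradient descent on a single neuron with a sine transfer function, y = a*sin(w*t + p), fitted to the signal by minimizing the mean-square error. The parameterization, learning rate and initial guesses below are assumptions, not the authors' algorithm.

        # Sketch: fit y = a*sin(w*t + p) to a periodic signal by gradient
        # descent on the mean-square error. All settings are assumptions.
        import numpy as np

        t = np.linspace(0.0, 1.0, 200)
        signal = 1.5 * np.sin(2 * np.pi * 3.0 * t + 0.4)   # test signal, f = 3 Hz

        a, w, p = 1.0, 2 * np.pi * 2.8, 0.0                # initial guesses
        lr = 0.05
        for _ in range(3000):
            y = a * np.sin(w * t + p)
            err = y - signal
            # gradients of 0.5 * mean(err^2) with respect to a, w and p
            a -= lr * np.mean(err * np.sin(w * t + p))
            w -= lr * np.mean(err * a * t * np.cos(w * t + p))
            p -= lr * np.mean(err * a * np.cos(w * t + p))

        # With an initial guess close enough to the true period this converges
        # to roughly 3 Hz; a poor guess can get trapped in a local minimum,
        # which is why a dedicated period-estimating network is attractive.
        print("estimated frequency (Hz):", w / (2 * np.pi))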

  3. A Topological Perspective of Neural Network Structure

    Science.gov (United States)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  4. Tumor Diagnosis Using Backpropagation Neural Network Method

    Science.gov (United States)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier Transform infrared (FT-IR) spectrum obtained by a new Fiberoptic Evanescent Wave Fourier Transform Infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbency values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbency spectra using chemical factor analysis. These abstract features or factors are also used in the classification.

  5. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable to nature. Networks configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks with the best dynamic range. I present a phase diagram that shows the dynamic ranges of networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory-to-inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline for deciding on parameter values in a simulation of a spiking neural network.
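    The "common values" quoted in this record translate directly into a random connectivity matrix: a 2% connection probability and an 80/20 split between excitatory and inhibitory neurons. The numpy sketch below builds such a matrix; the network size and synaptic weight magnitudes are assumptions for illustration only.

        # Random connectivity with the quoted common values: 2% connection
        # probability, 80% excitatory / 20% inhibitory. Weights are assumed.
        import numpy as np

        rng = np.random.default_rng(4)
        n, p_connect = 1000, 0.02
        excitatory = rng.random(n) < 0.8             # presynaptic neuron types

        mask = rng.random((n, n)) < p_connect        # which synapses exist
        np.fill_diagonal(mask, False)                # no self-connections

        weights = np.where(excitatory[:, None], 0.1, -0.4)  # sign from presynaptic type
        W = mask * weights                           # (pre, post) weight matrix

        print("mean out-degree:", mask.sum(axis=1).mean())  # about p_connect * (n - 1)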

  6. Neural-network-based voice-tracking algorithm

    Science.gov (United States)

    Baker, Mary; Stevens, Charise; Chaparro, Brennen; Paschall, Dwayne

    2002-11-01

    A voice-tracking algorithm was developed and tested for the purposes of electronically separating the voice signals of simultaneous talkers. Many individuals suffer from hearing disorders that often inhibit their ability to focus on a single speaker in a multiple speaker environment (the cocktail party effect). Digital hearing aid technology makes it possible to implement complex algorithms for speech processing in both the time and frequency domains. In this work, an average magnitude difference function (AMDF) was performed on mixed voice signals in order to determine the fundamental frequencies present in the signals. A time prediction neural network was trained to recognize normal human voice inflection patterns, including rising, falling, rising-falling, and falling-rising patterns. The neural network was designed to track the fundamental frequency of a single talker based on the training procedure. The output of the neural network can be used to design an active filter for speaker segregation. Tests were done using audio mixing of two to three speakers uttering short phrases. The AMDF function accurately identified the fundamental frequencies present in the signal. The neural network was tested using a single speaker uttering a short sentence. The network accurately tracked the fundamental frequency of the speaker.
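    The average magnitude difference function used above has a simple definition, AMDF(tau) = mean over t of |x(t) - x(t - tau)|, and its minima fall at multiples of the pitch period. The Python sketch below is a straightforward illustration on a synthetic 150 Hz tone, not the authors' implementation; the lag search range is an assumption.

        # Sketch of the average magnitude difference function (AMDF):
        #   AMDF(tau) = mean_t |x[t] - x[t - tau]|, minimal at the pitch period.
        import numpy as np

        fs = 8000
        t = np.arange(0, 0.05, 1 / fs)
        x = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)

        def amdf(x, max_lag):
            return np.array([np.mean(np.abs(x[lag:] - x[:len(x) - lag]))
                             for lag in range(1, max_lag)])

        d = amdf(x, max_lag=200)
        lag = np.argmin(d[20:]) + 21                 # skip tiny lags; d[i] is lag i + 1
        print("estimated fundamental:", fs / lag, "Hz")   # close to 150 Hz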

  7. Using neural networks for prediction of nuclear parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pereira Filho, Leonidas; Souto, Kelling Cabral, E-mail: leonidasmilenium@hotmail.com, E-mail: kcsouto@bol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, RJ (Brazil); Machado, Marcelo Dornellas, E-mail: dornemd@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (GCN.T/ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil). Gerencia de Combustivel Nuclear

    2013-07-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts developed a study on the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed until the 1980s witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization and prediction, they have numerous applications in various areas, including the nuclear field. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type will be trained to predict these values. The main objective of this work is the development of an MLP artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem: Boron Concentration and Power Peaking Factor. For the training of the neural networks, the loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three models of networks are constructed using the same input data and providing the following outputs: 1) Boron Concentration and Power Peaking Factor, 2) Boron Concentration, and 3) Power Peaking Factor. (author)

  8. Reformulated radial basis neural networks trained by gradient descent.

    Science.gov (United States)

    Karayiannis, N B

    1999-01-01

    This paper presents an axiomatic approach for constructing radial basis function (RBF) neural networks. This approach results in a broad variety of admissible RBF models, including those employing Gaussian RBF's. The form of the RBF's is determined by a generator function. New RBF models can be developed according to the proposed approach by selecting generator functions other than exponential ones, which lead to Gaussian RBF's. This paper also proposes a supervised learning algorithm based on gradient descent for training reformulated RBF neural networks constructed using the proposed approach. A sensitivity analysis of the proposed algorithm relates the properties of RBF's with the convergence of gradient descent learning. Experiments involving a variety of reformulated RBF networks generated by linear and exponential generator functions indicate that gradient descent learning is simple, easily implementable, and produces RBF networks that perform considerably better than conventional RBF models trained by existing algorithms.
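    For reference, the Gaussian special case discussed above reduces to the familiar RBF forward pass y(x) = sum_j w_j exp(-||x - c_j||^2 / (2 s_j^2)). The sketch below uses random placeholder centers, widths and weights rather than parameters trained with the paper's gradient-descent algorithm.

        # Gaussian RBF network forward pass with placeholder (untrained) values.
        import numpy as np

        rng = np.random.default_rng(5)
        d, m = 2, 10                                 # input dimension, number of RBF units
        centers = rng.normal(size=(m, d))
        widths = np.full(m, 0.5)
        w = rng.normal(size=m)

        def rbf_forward(x):
            dist2 = np.sum((centers - x) ** 2, axis=1)
            phi = np.exp(-dist2 / (2 * widths ** 2))
            return w @ phi                           # linear output layer

        print(rbf_forward(np.zeros(d)))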

  9. Polycrystalline-Diamond MEMS Biosensors Including Neural Microelectrode-Arrays.

    Science.gov (United States)

    Varney, Michael W; Aslam, Dean M; Janoudi, Abed; Chan, Ho-Yin; Wang, Donna H

    2011-08-15

    Diamond is a material of interest due to its unique combination of properties, including its chemical inertness and biocompatibility. Polycrystalline diamond (poly-C) has been used in experimental biosensors that utilize electrochemical methods and antigen-antibody binding for the detection of biological molecules. Boron-doped poly-C electrodes have been found to be very advantageous for electrochemical applications due to their large potential window, low background current and noise, and low detection limits (as low as 500 fM). The biocompatibility of poly-C is found to be comparable, or superior to, other materials commonly used for implants, such as titanium and 316 stainless steel. We have developed a diamond-based, neural microelectrode-array (MEA), due to the desirability of poly-C as a biosensor. These diamond probes have been used for in vivo electrical recording and in vitro electrochemical detection. Poly-C electrodes have been used for electrical recording of neural activity. In vitro studies indicate that the diamond probe can detect norepinephrine at a 5 nM level. We propose a combination of diamond micro-machining and surface functionalization for manufacturing diamond pathogen-microsensors.

  10. Polycrystalline-Diamond MEMS Biosensors Including Neural Microelectrode-Arrays

    Directory of Open Access Journals (Sweden)

    Donna H. Wang

    2011-08-01

    Full Text Available Diamond is a material of interest due to its unique combination of properties, including its chemical inertness and biocompatibility. Polycrystalline diamond (poly-C) has been used in experimental biosensors that utilize electrochemical methods and antigen-antibody binding for the detection of biological molecules. Boron-doped poly-C electrodes have been found to be very advantageous for electrochemical applications due to their large potential window, low background current and noise, and low detection limits (as low as 500 fM). The biocompatibility of poly-C is found to be comparable, or superior to, other materials commonly used for implants, such as titanium and 316 stainless steel. We have developed a diamond-based, neural microelectrode-array (MEA), due to the desirability of poly-C as a biosensor. These diamond probes have been used for in vivo electrical recording and in vitro electrochemical detection. Poly-C electrodes have been used for electrical recording of neural activity. In vitro studies indicate that the diamond probe can detect norepinephrine at a 5 nM level. We propose a combination of diamond micro-machining and surface functionalization for manufacturing diamond pathogen-microsensors.

  11. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    distribution, e.g. a particular book. In this experiment, we use a collection of writings by Nietzsche to train our network. In total, this corpus contains...

  12. Distorted Character Recognition Via An Associative Neural Network

    Science.gov (United States)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in on-going neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing the machine with an ability to recognize distorted objects within the same object class. It is the author's belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input cartesian image field, then sequency filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's crosscorrelation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2], which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage and a discussion of the use of the proposed neural architectures are included.

  13. Investment Valuation Analysis with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hüseyin İNCE

    2017-07-01

    Full Text Available This paper shows that discounted cash flow and net present value, which are traditional investment valuation models, can be combined with artificial neural network model forecasting. The main inputs for the valuation models, such as revenue, costs, capital expenditure, and their growth rates, are heavily related to sector dynamics and macroeconomics. The growth rates of those inputs are related to inflation and exchange rates. Therefore, predicting inflation and exchange rates is a critical issue for the valuation output. In this paper, the Turkish economy’s inflation rate and the exchange rate of USD/TRY are forecast by artificial neural networks and fed into the discounted cash flow model. Finally, the results are benchmarked against conventional practices.
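    Because the record couples neural network forecasts with standard discounted-cash-flow arithmetic, a compact reminder of the NPV calculation may be useful. The cash flows, growth rate and discount rate below are invented placeholders, and the forecasting network that would supply the growth rate is not shown.

        # NPV of forecast cash flows: sum_t c_t / (1 + r)^t. All numbers are
        # invented placeholders; the growth rate would come from the NN forecast.
        def npv(cash_flows, discount_rate):
            return sum(c / (1 + discount_rate) ** t
                       for t, c in enumerate(cash_flows, start=1))

        base_revenue, growth = 100.0, 0.08           # growth: e.g. a forecast inflation-linked rate
        cash_flows = [base_revenue * (1 + growth) ** t for t in range(1, 6)]
        print("NPV:", round(npv(cash_flows, discount_rate=0.12), 2))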

  14. Evaluating neural networks and artificial intelligence systems

    Science.gov (United States)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  15. Artificial Neural Network for Displacement Vectors Determination

    Directory of Open Access Journals (Sweden)

    P. Bohmann

    1997-09-01

    Full Text Available An artificial neural network (NN) for displacement vector (DV) determination is presented in this paper. DVs are computed in areas which are essential for image analysis and computer vision, i.e. areas containing edges, lines, corners, etc. These special features are found by edge operators followed by filtration. The filtration is performed by a threshold function. The next step is DV computation by a 2D Hamming artificial neural network. The DV computation method is based on full-search block matching algorithms. Because of the pre-processing (edge finding), the correlation function is very simple, the process of DV determination needs less computation and the structure of the NN is simpler.

  16. Neural Network Program Package for Prosody Modeling

    Directory of Open Access Journals (Sweden)

    J. Santarius

    2004-04-01

    Full Text Available This contribution describes the programme for one part of automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented the considerable improvement of the naturalness of synthetic speech, but this approach requires completing the input feature values by hand. This completion takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for the text and speech signal pre-processing, neural network training, sensitivity analysis, result processing, and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].

  17. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  18. Hierarchical Neural Network Structures for Phoneme Recognition

    CERN Document Server

    Vasquez, Daniel; Minker, Wolfgang

    2013-01-01

    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task, where a Hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayer Perceptron. Additionally, the output of the first level serves as the second level's input. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information. The computational time could be reduced by 57% whilst keeping the system accuracy comparable to the baseline hierarchical approach.

  19. Neural Network Solves "Traveling-Salesman" Problem

    Science.gov (United States)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.

  20. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  1. Convolutional Neural Networks for Font Classification

    OpenAIRE

    Tensmeyer, Chris; Saunders, Daniel; Martinez, Tony

    2017-01-01

    Classifying pages or text lines into font categories aids transcription because single font Optical Character Recognition (OCR) is generally more accurate than omni-font OCR. We present a simple framework based on Convolutional Neural Networks (CNNs), where a CNN is trained to classify small patches of text into predefined font classes. To classify page or line images, we average the CNN predictions over densely extracted patches. We show that this method achieves state-of-the-art performance...

  2. A Dynamic Neural Network Approach to CBM

    Science.gov (United States)

    2011-03-15

    Therefore post-processing is needed to extract the time difference between corresponding events from which to calculate the crankshaft rotational speed...potentially already available from existing sensors (such as a crankshaft timing device) and a Neural Network processor to carry out the calculation . As...files are designated with the “_genmod” suffix. These files were the sources for the training and testing sets and made the extraction process easy

  3. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  4. Identifying Tracks Duplicates via Neural Network

    CERN Document Server

    Sunjerga, Antonio; CERN. Geneva. EP Department

    2017-01-01

    The goal of the project is to study the feasibility of state-of-the-art machine learning techniques in track reconstruction. Machine learning techniques provide promising ways to speed up the pattern recognition of tracks by adding more intelligence to the algorithms. The implementation of a neural network for identifying track duplicates will be discussed. Different approaches are shown and the results are compared to the method that is currently in use.

  5. Multilingual Text Detection with Nonlinear Neural Network

    Directory of Open Access Journals (Sweden)

    Lin Li

    2015-01-01

    Full Text Available Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network based on the traditional Convolutional Neural Network that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.

  6. Forecasting Energy Commodity Prices Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Massimo Panella

    2012-01-01

    Full Text Available A new machine learning approach for price modeling is proposed. Neural networks, as an advanced signal processing tool, may be successfully used to model and forecast energy commodity prices, such as crude oil, coal, natural gas, and electricity prices. Energy commodities have shown explosive growth in the last decade. They have become a new asset class used also for investment purposes. This creates a huge demand for better modeling, similar to what occurred in the stock markets in the 1970s. Their price behavior presents unique features causing complex dynamics whose prediction is regarded as a challenging task. The use of a Mixture of Gaussian neural network may provide significant improvements with respect to other well-known models. We propose a computationally efficient learning of this neural network using the maximum likelihood estimation approach to calibrate the parameters. The optimal model is identified using a hierarchical constructive procedure that progressively increases the model complexity. Extensive computer simulations validate the proposed approach and provide an accurate description of commodity price dynamics.

  7. Flood estimation: a neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Swain, P.C.; Seshachalam, C.; Umamahesh, N.V. [Regional Engineering Coll., Warangal (India). Water and Environment Div.

    2000-07-01

    The artificial neural network (ANN) approach described in this study aims at predicting the flood flow into a reservoir. It differs from traditional methods of flow prediction in the sense that it belongs to a class of data-driven approaches, whereas the traditional methods are model-driven. Physical processes influencing the occurrence of streamflow in a river are highly complex and very difficult to model with available statistical or deterministic models. ANNs provide model-free solutions and hence can be expected to be appropriate in these conditions. Non-linearity, adaptivity, evidential response and fault tolerance are additional properties and capabilities of neural networks. This paper highlights the applicability of neural networks for predicting daily flood flow, taking the Hirakud reservoir on the river Mahanadi in Orissa, India, as the case study. The correlation between the observed and predicted flows and the relative error are considered to measure the performance of the model. The correlation between the observed and the modelled flows is computed to be 0.9467 in the testing phase of the model. (orig.)

  8. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford University Press, 1995.

  9. Artificial Neural Network Model for Predicting Compressive Strength

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of input parameters shows that the maximum absolute error for the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
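    A back-propagation regression model of the kind described here can be sketched in a few lines with scikit-learn. The feature columns, the synthetic data and the hidden-layer size below are stand-ins, not the study's dataset or trained network.

        # Sketch of a back-propagation regression model mapping mix-proportion
        # features to compressive strength. Data and sizes are synthetic stand-ins.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        X = rng.uniform(size=(200, 6))               # assumed features: cement, water, MAS, slump, ...
        w_c = X[:, 1] / (X[:, 0] + 0.5)              # pretend column 0 = cement, 1 = water
        y = 60 - 40 * w_c + rng.normal(0, 2, size=200)   # toy strength driven by w/c

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        model.fit(X[:150], y[:150])
        print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))

    The dominant role of the w/c column in the toy data mirrors the record's finding that the water/cement ratio is the most significant factor, but the numbers themselves carry no meaning.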

  10. Artificial neural network applications in ionospheric studies

    Directory of Open Access Journals (Sweden)

    L. R. Cander

    1998-06-01

    Full Text Available The ionosphere of Earth exhibits considerable spatial changes and has large temporal variability on various timescales related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through the entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics, which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to the current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show, by means of the most recent examples, that the modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully by applying artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of the ionospheric critical frequency, f0F2, and the Total Electron Content (TEC). Comparisons between results obtained by the proposed approaches and measured f0F2 and TEC data provide prospects for future applications of artificial neural networks in ionospheric studies.

  11. Improved Extension Neural Network and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2014-01-01

    Full Text Available Extension neural network (ENN) is a new neural network that is a combination of extension theory and artificial neural network (ANN). The learning algorithm of ENN is based on supervised learning. One of the important issues in the field of classification and recognition with ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to solve this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and at the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train ENN. Compared with the traditional ENN, the proposed improved ENN (IENN) has better performance. Moreover, IENN is independent of the supervised learning algorithms and of the initial labeled data. Experimental results verify the effectiveness and applicability of the proposed work.

  12. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    Energy Technology Data Exchange (ETDEWEB)

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, with 47 being from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick stop training procedure; therefore, the samples were split into training, calibration and prediction subsets. Special techniques using genetic algorithms were developed to representatively split the samples into the three subsets. Two separate approaches were tried. In one approach, the screened and unscreened coal was modeled separately. In the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e., though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (single predictions).
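
    The averaging argument in this record, that individual 20-second predictions are noisy while 2-3 minute block averages are reliable, can be illustrated with a short sketch; the ash values and noise level below are placeholders rather than mine data.

```python
# Minimal sketch: noisy per-sample (20-second) predictions versus 9-sample
# (roughly 3-minute) block averages of the same predictions.
import numpy as np

rng = np.random.default_rng(1)
true_ash = np.full(90, 8.0)                                   # 90 samples of 20 s, 8% ash (assumed)
predictions = true_ash + rng.normal(0.0, 1.5, size=90)        # noisy per-sample analyzer output

per_sample_err = np.mean(np.abs(predictions - true_ash))
block = predictions.reshape(-1, 9).mean(axis=1)               # 9-sample block means
block_err = np.mean(np.abs(block - 8.0))

print(f"20-second error: {per_sample_err:.2f} % ash, 3-minute error: {block_err:.2f} % ash")
```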

  13. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While there are many research problems that need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposed using a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system and bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, with prediction errors of less than 2.0 meters in both the temporal and spatial dimensions.

  14. A neural network model for texture discrimination.

    Science.gov (United States)

    Xing, J; Gerstein, G L

    1993-01-01

    A model of texture discrimination in visual cortex was built using a feedforward network with lateral interactions among relatively realistic spiking neural elements. The elements have various membrane currents, equilibrium potentials and time constants, with action potentials and synapses. The model is derived from the modified programs of MacGregor (1987). Gabor-like filters are applied to overlapping regions in the original image; the neural network with lateral excitatory and inhibitory interactions then compares and adjusts the Gabor amplitudes in order to produce the actual texture discrimination. Finally, a combination layer selects and groups various representations in the output of the network to form the final transformed image material. We show that both texture segmentation and detection of texture boundaries can be represented in the firing activity of such a network for a wide variety of synthetic to natural images. Performance details depend most strongly on the global balance of strengths of the excitatory and inhibitory lateral interconnections. The spatial distribution of lateral connective strengths has relatively little effect. Detailed temporal firing activities of single elements in the lateral connected network were examined under various stimulus conditions. Results show (as in area 17 of cortex) that a single element's response to image features local to its receptive field can be altered by changes in the global context.
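
    A minimal sketch of the front end described here, a Gabor-like filter bank applied to an image whose amplitude maps would feed the lateral excitatory/inhibitory stage, is given below; the kernel parameters and random test image are assumptions, not the authors' settings.

```python
# Minimal sketch (assumed, not the authors' code): Gabor-like filtering of an image
# at several orientations; the resulting amplitude maps are the kind of input the
# lateral-interaction network described in this record operates on.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real part of a Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

image = np.random.default_rng(0).random((64, 64))            # placeholder texture image
responses = [convolve2d(image, gabor_kernel(theta=t), mode="same")
             for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
amplitudes = np.stack([np.abs(r) for r in responses])         # per-orientation amplitude maps
print(amplitudes.shape)                                       # (4, 64, 64)
```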

  15. Categorization in neural networks and prosopagnosia

    Science.gov (United States)

    Virasoro, M. A.

    1989-12-01

    Prosopagnosia is a syndrome characterized by a generalized difficulty in visually recognizing individual patterns among those that are similar and can therefore be said to belong to the same category. I suggest that the existence of this dysfunction may be an important clue for understanding the categorization process in the brain. In this direction, the performance of neural networks under random destruction of synapses is analysed. It is found that in almost every network that stores correlated patterns, the coding of the discriminating details between individuals inside a class is more sensitive to noise or to random destruction than the coding that distinguishes between classes. It follows that a process of death and/or deterioration at an intermediate level of intensity, even if it acts randomly on the network, may lead to a malfunctioning of the network that resembles prosopagnosia.

  16. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    Science.gov (United States)

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950

  17. Word Vectorization Using Relations among Words for Neural Network

    Science.gov (United States)

    Hotta, Hajime; Kittaka, Masanobu; Hagiwara, Masafumi

    In this paper, we propose a new vectorization method for a new generation of computational intelligence including neural networks and natural language processing. In recent years, various techniques of word vectorization have been proposed, many of which rely on the preparation of dictionaries. However, these techniques do not consider the symbol grounding problem for unknown types of data, which is one of the most fundamental issues in artificial intelligence. In order to avoid the symbol grounding problem, pattern-processing-based methods such as neural networks are often used in various studies on self-directive systems and algorithms, and natural language processing is no exception with regard to the merits of neural networks. The proposed method is a converter from a single word input to a real-valued vector, whose algorithm is inspired by neural network architecture. The merits of the method are as follows: (1) the method requires no specific linguistic knowledge, e.g. word classes or grammar; (2) the method is a sequence learning technique and can learn additional knowledge. The experiment showed the efficiency of word vectorization in terms of similarity measurement.

  18. Unsupervised neural networks for solving Troesch's problem

    Science.gov (United States)

    Muhammad, Asif Zahoor Raja

    2014-01-01

    In this study, stochastic computational intelligence techniques are presented for the solution of Troesch's boundary value problem. The proposed stochastic solvers use the competency of a feed-forward artificial neural network for mathematical modeling of the problem in an unsupervised manner, whereas the learning of the unknown parameters is performed with local and global optimization methods as well as their combinations. Genetic algorithm (GA) and pattern search (PS) techniques are used as the global search methods and the interior point method (IPM) is used for an efficient local search. Combinations of techniques, like GA hybridized with IPM (GA-IPM) and PS hybridized with IPM (PS-IPM), are also applied to solve different forms of the equation. A comparison of the results obtained from GA, PS, IPM, PS-IPM and GA-IPM has been made with standard solutions, including the well-known analytic techniques of the Adomian decomposition method, the variational iteration method and the homotopy perturbation method. The reliability and effectiveness of the proposed schemes, in terms of accuracy and convergence, are evaluated from the results of a statistical analysis based on a sufficiently large number of independent runs.
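
    A minimal sketch of the unsupervised formulation described in this record is given below: a trial solution built from a small feedforward network satisfies the boundary conditions of Troesch's problem by construction, and the squared ODE residual is minimized on a grid. A generic local optimizer stands in for the GA/PS/IPM combinations used in the study, and the network size and grid are illustrative assumptions.

```python
# Minimal sketch, not the paper's solver: neural trial solution for Troesch's
# problem y'' = n*sinh(n*y), y(0)=0, y(1)=1. The trial form y(x) = x + x(1-x)*N(x)
# satisfies the boundary conditions exactly; the weights minimize the ODE residual.
import numpy as np
from scipy.optimize import minimize

n = 1.0                                   # Troesch parameter (assumed value)
x = np.linspace(0.0, 1.0, 41)
h = x[1] - x[0]

def trial(params, x):
    w1, b1, w2, b2 = params[:3], params[3:6], params[6:9], params[9]
    net = np.tanh(np.outer(x, w1) + b1) @ w2 + b2      # tiny 1-3-1 feedforward net
    return x + x * (1.0 - x) * net                     # boundary conditions built in

def residual(params):
    y = trial(params, x)
    ypp = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2      # finite-difference y''
    return np.sum((ypp - n * np.sinh(n * y[1:-1]))**2)

result = minimize(residual, np.zeros(10), method="Powell")   # local optimizer as a stand-in
y_hat = trial(result.x, x)
print(f"residual: {result.fun:.3e}, y(0.5) = {y_hat[20]:.4f}")
```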

  19. Neural network system for traffic flow management

    Science.gov (United States)

    Gilmore, John F.; Elibiary, Khalid J.; Petersson, L. E. Rickard

    1992-09-01

    Atlanta will be the home of several special events during the next five years ranging from the 1996 Olympics to the 1994 Super Bowl. When combined with the existing special events (Braves, Falcons, and Hawks games, concerts, festivals, etc.), the need to effectively manage traffic flow from surface streets to interstate highways is apparent. This paper describes a system for traffic event response and management for intelligent navigation utilizing signals (TERMINUS) developed at Georgia Tech for adaptively managing special event traffic flows in the Atlanta, Georgia area. TERMINUS (the original name given Atlanta, Georgia based upon its role as a rail line terminating center) is an intelligent surface street signal control system designed to manage traffic flow in Metro Atlanta. The system consists of three components. The first is a traffic simulation of the downtown Atlanta area around Fulton County Stadium that models the flow of traffic when a stadium event lets out. Parameters for the surrounding area include modeling for events during various times of day (such as rush hour). The second component is a computer graphics interface with the simulation that shows the traffic flows achieved based upon intelligent control system execution. The final component is the intelligent control system that manages surface street light signals based upon feedback from control sensors that dynamically adapt the intelligent controller's decision making process. The intelligent controller is a neural network model that allows TERMINUS to control the configuration of surface street signals to optimize the flow of traffic away from special events.

  20. HAWC Energy Reconstruction via Neural Network

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and a duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  1. Multiscale Convolutional Neural Networks for Hand Detection

    Directory of Open Access Journals (Sweden)

    Shiyang Yan

    2017-01-01

    Full Text Available Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition and human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs in particular, have achieved state-of-the-art performances in many vision benchmarks. Developed from the region-based CNN (R-CNN model, we propose a hand detection scheme based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were applied to validate the proposed method, namely, the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and had satisfactory performance in the VIVA Hand Detection Challenge.

  2. A linear model for characterization of synchronization frequencies of neural networks.

    Science.gov (United States)

    Lv, Peili; Hu, Xintao; Lv, Jinglei; Han, Junwei; Guo, Lei; Liu, Tianming

    2014-02-01

    The synchronization frequency of neural networks and its dynamics have important roles in deciphering the working mechanisms of the brain. It has been widely recognized that the properties of functional network synchronization and its dynamics are jointly determined by network topology, network connection strength, i.e., the connection strength of different edges in the network, and external input signals, among other factors. However, mathematical and computational characterization of the relationships between network synchronization frequency and these three important factors are still lacking. This paper presents a novel computational simulation framework to quantitatively characterize the relationships between neural network synchronization frequency and network attributes and input signals. Specifically, we constructed a series of neural networks including simulated small-world networks, real functional working memory network derived from functional magnetic resonance imaging, and real large-scale structural brain networks derived from diffusion tensor imaging, and performed synchronization simulations on these networks via the Izhikevich neuron spiking model. Our experiments demonstrate that both of the network synchronization strength and synchronization frequency change according to the combination of input signal frequency and network self-synchronization frequency. In particular, our extensive experiments show that the network synchronization frequency can be represented via a linear combination of the network self-synchronization frequency and the input signal frequency. This finding could be attributed to an intrinsically-preserved principle in different types of neural systems, offering novel insights into the working mechanism of neural systems.
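
    The reported linear relationship can be written as f_sync = a*f_self + b*f_input and recovered by ordinary least squares, as in the sketch below; the frequency values are placeholders rather than numbers from the spiking simulations.

```python
# Minimal sketch of the linear relationship described in this record: fit the
# network synchronization frequency as a linear combination of the network
# self-synchronization frequency and the input signal frequency.
import numpy as np

f_self = np.array([12.0, 12.0, 20.0, 20.0, 35.0, 35.0])    # Hz, per network (toy values)
f_input = np.array([10.0, 30.0, 10.0, 30.0, 10.0, 30.0])   # Hz, external drive (toy values)
f_sync = np.array([11.2, 22.5, 15.8, 26.1, 23.0, 32.4])    # Hz, observed (toy values)

A = np.column_stack([f_self, f_input])
coeffs, *_ = np.linalg.lstsq(A, f_sync, rcond=None)        # least-squares fit of a and b
a, b = coeffs
print(f"f_sync = {a:.2f} * f_self + {b:.2f} * f_input")
```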

  3. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are almost the most important factor influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights in order to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing the performance of these algorithms with other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and the classification of continuous real data sets such as Iris and Ecoli.

  4. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flows simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high-speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  5. Decorrelated Jet Substructure Tagging using Adversarial Neural Networks

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted Z' decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially-trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a paramet...

  6. Network traffic anomaly prediction using Artificial Neural Network

    Science.gov (United States)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase in internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identities, causing computer damage, or denying service to other users [1]. Malware which attacks a computer or server often triggers network traffic anomaly phenomena. Based on Sophos's report [2], Indonesia is the country most at risk of malware attack and it also has high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most frequent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014 [4]. The data series is preprocessed first, then the network traffic anomaly is predicted using an Artificial Neural Network with two weight update algorithms: Gradient Descent and Momentum. The prediction error is calculated using the Mean Squared Error (MSE) [7]. The experimental results show that the MSE for SQL Injection is 0.03856, so this approach can be used to predict network traffic anomalies.
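
    The two weight-update rules compared in this record, plain gradient descent and gradient descent with momentum, can be illustrated on a mean-squared-error objective as below; the toy data, learning rate and momentum value are assumptions for illustration.

```python
# Minimal sketch: gradient descent with and without momentum for a single linear
# neuron trained on a mean-squared-error objective.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                     # toy features (e.g. lagged traffic counts)
y = X @ np.array([0.5, -0.2, 0.8]) + 0.1     # toy target series

def train(lr=0.1, momentum=0.0, epochs=200):
    w, velocity = np.zeros(3), np.zeros(3)
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)    # gradient of the MSE
        velocity = momentum * velocity - lr * grad  # momentum term (zero -> plain GD)
        w = w + velocity
    return np.mean((X @ w - y) ** 2)

print(f"MSE, plain gradient descent: {train(momentum=0.0):.5f}")
print(f"MSE, with momentum:          {train(momentum=0.9):.5f}")
```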

  7. Neural network controller for underwater work ROV. Suichu sagyoyo ROV no neural network controller

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Y.; Kidoshi, H.; Arahata, M.; Shoji, K.; Takahashi, Y. (Ishikawajima-Harima Heavy Industries, Co. Ltd., Tokyo (Japan))

    1993-07-01

    The previous underwater work ROV (remotely operated vehicle) has been controlled manually because its dynamic properties are changeable underwater. Ishikawajima-Harima Heavy Industries (IHI) has applied a neural network to an adaptive controller for the ROV. This paper describes the objectives of the research, the design of the control logic, and tank experiments on a model ROV. For the neural network, manual operation was used to provide the initial learning data in order to initialize the control parameters for optimization. The model ROV was designed to achieve and maintain a constant depth in normal operation. As a consequence of the tank experiments, it was demonstrated that the controller can acquire the skill of operators, can further improve the acquired skill, and can construct an automatic control system autonomously even if the dynamic properties are not known. 6 refs., 8 figs.

  8. Marginalization in Random Nonlinear Neural Networks

    Science.gov (United States)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.

  9. Neural Network Model of memory retrieval

    Directory of Open Access Journals (Sweden)

    Stefano eRecanatesi

    2015-12-01

    Full Text Available Human memory can store large amounts of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
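
    A minimal Hopfield-style sketch of the retrieval setting described here is given below: items are stored with a Hebbian rule and the network settles onto a stored item from a noisy cue. The oscillating inhibition that drives transitions between memories in the paper's model is omitted, and all sizes are illustrative.

```python
# Minimal sketch (assumed, not the authors' model): Hebbian storage and
# asynchronous retrieval in a Hopfield network.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))            # long-term memory items
W = (patterns.T @ patterns) / N                        # Hebbian weights
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
flip = rng.choice(N, size=40, replace=False)           # corrupt 20% of the cue
cue[flip] *= -1

state = cue.copy()
for _ in range(5 * N):                                 # asynchronous updates
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

overlaps = patterns @ state / N                        # similarity to each stored item
print("overlap with each stored memory:", np.round(overlaps, 2))
```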

  10. Neural Network Control of Asymmetrical Multilevel Converters

    Directory of Open Access Journals (Sweden)

    Patrice WIRA

    2009-12-01

    Full Text Available This paper proposes a neural implementation of a harmonic elimination strategy (HES) to control a Uniform Step Asymmetrical Multilevel Inverter (USAMI). The mapping between the modulation rate and the required switching angles is learned and approximated with a Multi-Layer Perceptron (MLP) neural network. After learning, appropriate switching angles can be determined with the neural network, leading to a low-computational-cost neural controller which is well suited for real-time applications. This technique can be applied to multilevel inverters with any number of levels. As an example, a nine-level inverter and an eleven-level inverter are considered and the optimum switching angles are calculated on-line. Comparisons to the well-known sinusoidal pulse-width modulation (SPWM) have been carried out in order to evaluate the performance of the proposed approach. Simulation results demonstrate the technical advantages of the proposed neural implementation over the conventional method (SPWM) in eliminating harmonics while controlling a nine-level and an eleven-level USAMI. This neural approach is applied to the supply of an asynchronous machine and the results show that it ensures the highest quality torque by efficiently canceling the harmonics generated by the inverters.

  11. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.

  12. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries, the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of stream course, an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements, seriously restricting the application, especially when dealing with real-time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN). First studies show the ability of adequately trained multilayer feedforward networks (MLFN) to reproduce the model performance.

  13. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered networks and self-organizing analysis maps, which have a strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  14. Quantum generalisation of feedforward neural networks

    Science.gov (United States)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  15. The Usage of Neural Networks for the Medical Diagnosis

    OpenAIRE

    Malyshevska, Kateryna

    2009-01-01

    The problem of cancer diagnosis from multi-channel images using the neural networks is investigated. The goal of this work is to classify the different tissue types which are used to determine the cancer risk. The radial basis function networks and backpropagation neural networks are used for classification. The results of experiments are presented.

  16. Daily Nigerian peak load forecasting using artificial neural network ...

    African Journals Online (AJOL)

    A daily peak load forecasting technique that uses artificial neural network with seasonal indices is presented in this paper. A neural network of relatively smaller size than the main prediction network is used to predict the daily peak load for a period of one year over which the actual daily load data are available using one ...

  17. Prediction of Parametric Roll Resonance by Multilayer Perceptron Neural Network

    DEFF Research Database (Denmark)

    Míguez González, M; López Peña, F.; Díaz Casás, V.

    2011-01-01

    acknowledged in the last few years. This work proposes a prediction system based on a multilayer perceptron (MP) neural network. The training and testing of the MP network is accomplished by feeding it with simulated data of a three degrees-of-freedom nonlinear model of a fishing vessel. The neural network...

  18. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  19. Particle swarm optimization of a neural network model in a ...

    Indian Academy of Sciences (India)

    This paper presents a particle swarm optimization (PSO) technique to train an artificial neural network (ANN) for prediction of flank wear in drilling, and compares the network performance with that of the back propagation neural network (BPNN). This analysis is carried out following a series of experiments employing high ...

  20. Survey on Neural Networks Used for Medical Image Processing.

    Science.gov (United States)

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-02-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application to medical image processing and an outlook for future research are also discussed. By this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe that this would be very helpful to researchers who are involved in medical image processing with neural network techniques.

  1. Permeability prediction in shale gas reservoirs using Neural Network

    Science.gov (United States)

    Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    Here, we suggest the use of an artificial neural network for permeability prediction in shale gas reservoirs. Prediction of permeability in shale gas reservoirs is a complicated task that requires new models where Darcy's fluid flow model is not suitable. The proposed idea is based on training a neural network machine using a set of well-log data as input and the measured permeability as output. In this case a Multilayer Perceptron neural network machine is used with the Levenberg-Marquardt algorithm. Application to two horizontal wells drilled in the Barnett shale formation exhibits the power of the neural network model to resolve such a problem. Keywords: Artificial neural network, permeability, prediction, shale gas.

  2. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and taking the model compared with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423

  3. Artificial neural network modeling of p-cresol photodegradation.

    Science.gov (United States)

    Abdollahi, Yadollah; Zakaria, Azmi; Abbasiyannejad, Mina; Masoumi, Hamid Reza Fard; Moghaddam, Mansour Ghaffari; Matori, Khamirul Amin; Jahangirian, Hossein; Keshavarzi, Ashkan

    2013-06-03

    The complexity of the reactions and kinetics is the current problem of photodegradation processes. Recently, artificial neural networks have been widely used to solve such problems because of their reliable, robust, and salient characteristics in capturing the non-linear relationships between variables in complex systems. In this study, an artificial neural network was applied to model p-cresol photodegradation. To optimize the network, the independent variables including irradiation time, pH, photocatalyst amount and concentration of p-cresol were used as the input parameters, while the photodegradation% was selected as the output. The photodegradation% was obtained from the performance of the experimental design of the variables under UV irradiation. The network was trained by Quick propagation (QP) and three other algorithms as a model. To determine the number of hidden layer nodes in the model, the root mean squared error of the testing set was minimized. After minimizing the error, the topologies of the algorithms were compared by the coefficient of determination and the absolute average deviation. The comparison indicated that the Quick propagation algorithm had the minimum root mean squared error, 1.3995, absolute average deviation, 3.0478, and the maximum coefficient of determination, 0.9752, for the testing data set. The validation test results of the artificial neural network based on QP indicated that the root mean squared error was 4.11, the absolute average deviation was 8.071 and the maximum coefficient of determination was 0.97. The artificial neural network based on the Quick propagation algorithm with topology 4-10-1 gave the best performance in this study.
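
    The reported 4-10-1 topology (four inputs: irradiation time, pH, photocatalyst amount and p-cresol concentration; ten hidden neurons; one output: photodegradation %) can be sketched as below. sklearn's MLPRegressor stands in for the Quick propagation training used in the study, and the data are synthetic placeholders rather than the experimental design points.

```python
# Minimal sketch of a 4-10-1 network for photodegradation %; data and ranges are
# illustrative assumptions, not the study's experimental design.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# inputs: irradiation time (min), pH, photocatalyst amount (g/L), p-cresol (mg/L)
X = rng.uniform([0, 3, 0.1, 10], [300, 10, 2.0, 100], size=(60, 4))
y = 100 * (1 - np.exp(-0.01 * X[:, 0])) * (X[:, 2] / 2.0)     # toy degradation %

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:48], y[:48])

pred = model.predict(X[48:])
aad = 100 * np.mean(np.abs(pred - y[48:]) / np.maximum(y[48:], 1.0))  # absolute average deviation
print(f"R^2: {model.score(X[48:], y[48:]):.3f}, AAD: {aad:.2f} %")
```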

  4. Self-organization in neural networks - Applications in structural optimization

    Science.gov (United States)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and the Hopfield and Elastic networks, in problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted in the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.

  5. Feedforward Backpropagation Neural Networks in Prediction of Farmer Risk Preferences

    OpenAIRE

    Kastens, Terry L.; Featherstone, Allen M.

    1996-01-01

    An out-of-sample prediction of Kansas farmers' responses to five surveyed questions involving risk is used to compare ordered multinomial logistic regression models with feedforward backpropagation neural network models. Although the logistic models often predict more accurately than the neural network models in a mean-squared error sense, the neural network models are shown to be more accommodating of loss functions associated with a desire to predict certain combinations of categorical resp...

  6. Classification of behavior using unsupervised temporal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Adair, K.L. [Florida State Univ., Tallahassee, FL (United States). Dept. of Computer Science; Argo, P. [Los Alamos National Lab., NM (United States)

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential pattern sequences as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem.

  7. Pixel-wise Segmentation of Street with Neural Networks

    OpenAIRE

    Bittel, Sebastian; Kaiser, Vitali; Teichmann, Marvin; Thoma, Martin

    2015-01-01

    Pixel-wise street segmentation of photographs taken from a driver's perspective is important for self-driving cars and can also support other object recognition tasks. A framework called SST was developed to examine the accuracy and execution time of different neural networks. The best result, an $F_1$-score of 89.5%, was achieved with a simple feedforward neural network trained to solve a regression task.

  8. Survey on Neural Networks Used for Medical Image Processing

    OpenAIRE

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-01-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by its processing goals and the nature of medical images. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application for medical image processing and an outlook for the future research are also discussed. By this survey, we try to answer the following two important questions: (1) Wh...

  9. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    Generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient descent based optimized smoothing parameter value to provide efficient classification. However, the optimization consumes quite a long time, which can be a drawback. In this work, one pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for the smoothing parameter calculation. Thresholding is applied to determine which function will be used. One of these functions is defined for datasets having a wide range of values. It provides balanced smoothing parameters for these datasets through a logarithmic function and by changing the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets and the performance of the one pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural network in the MATLAB environment. One pass learning generalized classifier neural network provides classification more than a thousand times faster than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.
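
    The one-pass idea, deriving each class's smoothing parameter from its standard deviation and switching between two formulas at a threshold, can be sketched as below; the particular formulas and threshold are assumptions for illustration, since the paper defines its own pair of functions.

```python
# Minimal sketch: per-class smoothing parameters derived directly from class
# standard deviations, with a threshold selecting between two formulas.
# The formulas and threshold here are illustrative assumptions.
import numpy as np

def smoothing_parameters(X, labels, threshold=1.0):
    sigmas = {}
    for c in np.unique(labels):
        std = X[labels == c].std()
        if std < threshold:
            sigmas[c] = std                  # small-spread classes: use the std directly
        else:
            sigmas[c] = np.log1p(std)        # wide-range classes: logarithmic damping
    return sigmas

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 40.0, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
print(smoothing_parameters(X, labels))
```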

  10. Neural networks analysis on SSME vibration simulation data

    Science.gov (United States)

    Lo, Ching F.; Wu, Kewei

    1993-01-01

    The neural networks method is applied to investigate the feasibility in detecting anomalies in turbopump vibration of SSME to supplement the statistical method utilized in the prototype system. The investigation of neural networks analysis is conducted using SSME vibration data from a NASA developed numerical simulator. The limited application of neural networks to the HPFTP has also shown the effectiveness in diagnosing the anomalies of turbopump vibrations.

  11. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important roles in the machine learning area is to classify, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in our paper. After summarizing the theoretical construction of the model, we take a simple and a practical weather forecasting experiment, which show that the recognizer accuracy reaches 100% and that is promising.

  12. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From neural network point of view, we present interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  13. Training product unit neural networks with genetic algorithms

    Science.gov (United States)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
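
    A product unit computes a product of its inputs raised to learned exponents instead of a weighted sum; the forward pass is sketched below (the genetic-algorithm training used in the paper is omitted, and the example exponents are illustrative).

```python
# Minimal sketch of a product unit: output = prod_i x_i ** w_i, computed in log
# space for numerical stability (positive inputs assumed).
import numpy as np

def product_unit(x, exponents):
    x = np.asarray(x, dtype=float)
    return float(np.exp(np.sum(exponents * np.log(x))))

# With exponents [1, -1] a single unit represents the ratio x0 / x1, something a
# summing unit cannot express directly.
print(product_unit([4.0, 2.0], np.array([1.0, -1.0])))   # -> 2.0
```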

  14. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    in unison to solve a specific problem. The network learns through examples, so it requires good examples to train properly, and a trained network model can then be used for prediction purposes. In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer neural networks are often used...

  15. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    Science.gov (United States)

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model based wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. The random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done in this paper employing 102 criteria; these evolved criteria are verified by the various computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied for wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature. PMID:25157950
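
    The ensemble idea, averaging forecasts from several independently trained networks, can be sketched as below; generic sklearn regressors stand in for the MLP/Madaline/BPN/PNN members used in the paper, and the wind-speed series is synthetic.

```python
# Minimal sketch: average the forecasts of several independently trained networks
# and compare against a single member on a toy wind-speed series.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(600)
wind = 8 + 2 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 0.8, size=t.size)

LAGS = 6
X = np.column_stack([wind[i:len(wind) - LAGS + i] for i in range(LAGS)])
y = wind[LAGS:]
split = int(0.8 * len(y))

members = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=3000, random_state=s)
           for h, s in [(5, 0), (10, 1), (20, 2), (10, 3)]]
for m in members:
    m.fit(X[:split], y[:split])

ensemble = np.mean([m.predict(X[split:]) for m in members], axis=0)
for name, pred in [("single model", members[0].predict(X[split:])), ("ensemble", ensemble)]:
    print(f"{name}: MAE = {np.mean(np.abs(pred - y[split:])):.3f} m/s")
```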

  16. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    Science.gov (United States)

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models and based on the criterion evolved an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model based wind speed forecasting is designed by averaging the forecasted values from multiple neural network models which includes multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN) so as to obtain better accuracy in wind speed prediction with minimum error. The random selection of hidden neurons numbers in artificial neural network results in overfitting or underfitting problem. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of number of hidden neurons is done in this paper employing 102 criteria; these evolved criteria are verified by the computed various error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied for wind speed prediction application considering the real time wind data collected from the nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with that of the earlier models available in the literature.

  17. Parameterizing Stellar Spectra Using Deep Neural Networks

    Science.gov (United States)

    Li, Xiang-Ru; Pan, Ru-Yang; Duan, Fu-Qing

    2017-03-01

    Large-scale sky surveys are observing massive amounts of stellar spectra. The large number of stellar spectra makes it necessary to automatically parameterize spectral data, which in turn helps in statistically exploring properties related to the atmospheric parameters. This work focuses on designing an automatic scheme to estimate effective temperature (Teff), surface gravity (log g) and metallicity [Fe/H] from stellar spectra. A scheme based on three deep neural networks (DNNs) is proposed. This scheme consists of the following three procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, the three atmospheric parameters Teff, log g and [Fe/H] are estimated using the computed DNNs. The constructed DNN is a neural network with six layers (one input layer, one output layer and four hidden layers), for which the numbers of nodes in the six layers are 3821, 1000, 500, 100, 30 and 1, respectively. The proposed scheme was tested on both real spectra and theoretical spectra from Kurucz’s new opacity distribution function models. Test errors are measured with mean absolute errors (MAEs). The errors on real spectra from the Sloan Digital Sky Survey (SDSS) are 0.1477, 0.0048 and 0.1129 dex for log g, log Teff and [Fe/H] (64.85 K for Teff), respectively. Regarding theoretical spectra from Kurucz’s new opacity distribution function models, the MAEs of the test errors are 0.0182, 0.0011 and 0.0112 dex for log g, log Teff and [Fe/H] (14.90 K for Teff), respectively.
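
    The reported 3821-1000-500-100-30-1 architecture can be written out directly, as in the PyTorch sketch below; the layer-wise autoencoder pre-training and fine-tuning used in the study are omitted, and the input spectra are random placeholders of the same length.

```python
# Minimal sketch of the reported six-layer architecture (forward pass only);
# pre-training, fine-tuning and real SDSS spectra are omitted.
import torch
import torch.nn as nn

layer_sizes = [3821, 1000, 500, 100, 30, 1]
layers = []
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    layers += [nn.Linear(n_in, n_out), nn.Sigmoid()]
model = nn.Sequential(*layers[:-1])           # no activation on the single output

spectra = torch.rand(16, 3821)                # 16 placeholder spectra, 3821 pixels each
param_pred = model(spectra)                   # e.g. predicted log(Teff), one per spectrum

target = torch.zeros(16, 1)                   # placeholder targets
print(nn.functional.l1_loss(param_pred, target).item())   # MAE, the metric quoted here
```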

  18. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation can be categorized into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful Deep Recurrent Neural Network, termed the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach as compared to a classical Feed Forward Neural Network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods, and it is able to learn the temporal evolution of the precipitation events over more than 1000 time lags. The uniqueness of the PERSIANN algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.
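
    For readers unfamiliar with the LSTM building block used in this nowcasting approach, a single forward step of a standard LSTM cell is sketched below; the weight shapes and sizes are assumed for illustration, and this is the textbook gating arithmetic rather than the authors' CTBT forecasting network.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        """One step of a standard LSTM cell.

        x_t    : input vector at time t, shape (n_in,)
        h_prev : previous hidden state, shape (n_hid,)
        c_prev : previous cell state, shape (n_hid,)
        W, U   : input and recurrent weights, shapes (4*n_hid, n_in) and (4*n_hid, n_hid)
        b      : bias, shape (4*n_hid,)
        """
        n_hid = h_prev.shape[0]
        z = W @ x_t + U @ h_prev + b
        i = sigmoid(z[0*n_hid:1*n_hid])          # input gate
        f = sigmoid(z[1*n_hid:2*n_hid])          # forget gate
        o = sigmoid(z[2*n_hid:3*n_hid])          # output gate
        g = np.tanh(z[3*n_hid:4*n_hid])          # candidate cell update
        c_t = f * c_prev + i * g                 # new cell state (long-term memory)
        h_t = o * np.tanh(c_t)                   # new hidden state (short-term output)
        return h_t, c_t

    # Illustrative shapes only: 16 input features, 32 hidden units.
    rng = np.random.default_rng(0)
    n_in, n_hid = 16, 32
    W = rng.normal(size=(4*n_hid, n_in))
    U = rng.normal(size=(4*n_hid, n_hid))
    b = np.zeros(4*n_hid)
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)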

  19. Using Artificial Neural Networks to Predict Stock Prices

    OpenAIRE

    Kozdraj, Tomasz

    2009-01-01

    Artificial neural networks constitute one of the most developed concepts of artificial intelligence. They are based on pragmatic mathematical theories adapted to the tasks they are meant to resolve. A wide range of their applications also includes financial investment issues. The reason for NNs' popularity is mainly connected with their ability to solve complex or poorly understood computational tasks, their efficiency in finding solutions, as well as the possibility of learning from patterns or without them....

  20. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  1. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  2. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

  3. Material procedure quality forecast based on genetic BP neural network

    Science.gov (United States)

    Zheng, Bao-Hua

    2017-07-01

    Material procedure quality forecast plays an important role in quality control. This paper proposes a prediction model based on a genetic algorithm (GA) and a back propagation (BP) neural network. The GA's global search ability is used to obtain optimized initial weights and thresholds for the BP neural network. The material process quality prediction model built on the optimized BP neural network is then used to predict the error of future process runs as a measure of process quality. The results show that the proposed method has the advantages of high accuracy and a fast convergence rate compared with a plain BP neural network.
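
    The overall scheme is: use a genetic algorithm to search globally for good initial weights and thresholds, then refine them with ordinary backpropagation. The sketch below illustrates that two-stage idea on a toy one-hidden-layer network; the population size, mutation scale, toy data and learning rate are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy process data (stand-in for the material-process samples): predict y from x.
    X = rng.random((64, 3))
    y = (X.sum(axis=1, keepdims=True) > 1.5).astype(float)

    N_IN, N_HID, N_OUT = 3, 5, 1
    N_W = N_IN*N_HID + N_HID + N_HID*N_OUT + N_OUT   # total number of weights/biases

    def unpack(v):
        i = 0
        W1 = v[i:i+N_IN*N_HID].reshape(N_IN, N_HID); i += N_IN*N_HID
        b1 = v[i:i+N_HID]; i += N_HID
        W2 = v[i:i+N_HID*N_OUT].reshape(N_HID, N_OUT); i += N_HID*N_OUT
        b2 = v[i:i+N_OUT]
        return W1, b1, W2, b2

    def forward(v, X):
        W1, b1, W2, b2 = unpack(v)
        H = np.tanh(X @ W1 + b1)
        return H, 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))

    def mse(v):
        _, out = forward(v, X)
        return float(np.mean((out - y) ** 2))

    # Stage 1: GA global search for promising initial weights (illustrative GA settings).
    pop = rng.normal(0, 1, size=(30, N_W))
    for gen in range(50):
        fitness = np.array([mse(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:10]]                 # keep the 10 best individuals
        children = parents[rng.integers(0, 10, size=20)] + rng.normal(0, 0.1, size=(20, N_W))
        pop = np.vstack([parents, children])                    # elitism + mutated offspring
    best = pop[np.argmin([mse(ind) for ind in pop])].copy()

    # Stage 2: refine the GA result with plain backpropagation (gradient descent).
    lr = 0.5
    for epoch in range(200):
        W1, b1, W2, b2 = unpack(best)
        H, out = forward(best, X)
        d_out = (out - y) * out * (1 - out)                     # sigmoid output layer delta
        d_hid = (d_out @ W2.T) * (1 - H ** 2)                   # tanh hidden layer delta
        grads = np.concatenate([(X.T @ d_hid).ravel(), d_hid.sum(0),
                                (H.T @ d_out).ravel(), d_out.sum(0)]) / len(X)
        best -= lr * grads
    print("final training MSE:", mse(best))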

  4. Power converters and AC electrical drives with linear neural networks

    CERN Document Server

    Cirrincione, Maurizio

    2012-01-01

    The first book of its kind, Power Converters and AC Electrical Drives with Linear Neural Networks systematically explores the application of neural networks in the field of power electronics, with particular emphasis on the sensorless control of AC drives. It presents the classical theory based on space-vectors in identification, discusses control of electrical drives and power converters, and examines improvements that can be attained when using linear neural networks. The book integrates power electronics and electrical drives with artificial neural networks (ANN). Organized into four parts,

  5. A hardware implementation of neural network with modified HANNIBAL architecture

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bum youb; Chung, Duck Jin [Inha University, Inchon (Korea, Republic of)

    1996-03-01

    A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Back propagation Algorithm Learning). To implement an efficient neural network hardware, we analyzed various types of multipliers, which are the major function blocks of the neuro-processor cell. With this result, we designed an efficient digital neural network hardware using a serial/parallel multiplier and tested its operation. We also analyzed the hardware efficiency with logic-level simulation. (author). 14 refs., 10 figs., 3 tabs.

  6. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  7. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lofti A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  8. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    Analogous to the biological neurons, the ANN works on the input and output passing through a hidden layer. The ANN used here is a data-oriented modeling technique that finds relations between input and output patterns by self-learning, without any fixed mathematical form assumed. The training error is defined as E = (1/p) Σ_p Ep (2), where Ep = (1/2) Σ_k (Tk - Ok)^2 (3), p is the total number of training patterns, Tk is the actual output and Ok is the predicted output at the kth output node. In the learning process of the backpropagation neural network...
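
    As a concrete illustration of the error measure above, the snippet below computes Ep for each training pattern and the mean error E over all patterns; the array shapes and values are assumed purely for illustration.

    import numpy as np

    def pattern_error(T, O):
        """Ep = 1/2 * sum_k (T_k - O_k)^2 for one training pattern."""
        return 0.5 * np.sum((T - O) ** 2)

    def mean_error(targets, outputs):
        """E = (1/p) * sum over the p training patterns of Ep."""
        return np.mean([pattern_error(T, O) for T, O in zip(targets, outputs)])

    # Assumed example: 4 training patterns, 2 output nodes each.
    targets = np.array([[1.0, 0.5], [0.8, 0.2], [0.3, 0.9], [0.6, 0.6]])
    outputs = np.array([[0.9, 0.4], [0.7, 0.3], [0.4, 0.8], [0.5, 0.7]])
    print("E =", mean_error(targets, outputs))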

  9. Convolutional neural networks and face recognition task

    Science.gov (United States)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solving this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face; then we use the Canny edge detector. At the next stage we use convolutional neural networks (CNN) to finally solve the face recognition and person identification task.

  10. Convolution neural networks for ship type recognition

    Science.gov (United States)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  11. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm, and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a

  12. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

    Full Text Available In this paper, we consider a method for defect detection in a video sequence, which consists of three main steps: frame compensation; preprocessing by a detector based on the ranking of pixel values; and the classification of all pixels having anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged under natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.

  13. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island” of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of neurophobic, polyimide-coated substrate by triblock-copolymer coating enhances neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm, the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV, which is two weeks later than in unpatterned networks.

  14. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whiwlow; Larsen, Jan

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  15. PID Neural Network Based Speed Control of Asynchronous Motor Using Programmable Logic Controller

    Directory of Open Access Journals (Sweden)

    MARABA, V. A.

    2011-11-01

    Full Text Available This paper deals with the structure and characteristics of a PID Neural Network controller for single-input single-output systems. The PID Neural Network is a new kind of controller that combines the advantages of artificial neural networks and the classic PID controller. The controller operates by updating its parameters according to the value extracted from the system output, following the rules of the backpropagation algorithm used in artificial neural networks. Parameters obtained by applying the PID Neural Network training algorithm to the speed model of an asynchronous motor exhibiting second-order linear behavior were used in the real-time speed control of the motor. A programmable logic controller (PLC) was used as the real-time controller. The real-time control results show that the reference speed is successfully maintained under various load conditions.
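
    A PID neural network of this kind can be viewed as three hidden neurons performing proportional, integral and derivative operations on the control error, with output weights adapted online by a backpropagation-style gradient rule. The sketch below is a generic discrete-time version of that idea on an assumed first-order toy plant; the gains, learning rate and plant are illustrative placeholders, not the paper's controller or the asynchronous-motor model.

    import numpy as np

    def run_pidnn(setpoint=1.0, steps=500, dt=0.01, lr=0.02):
        """Generic PID-neural-network-style control loop on a toy first-order plant (illustrative)."""
        w = np.array([1.0, 0.5, 0.01])       # output weights of the P, I and D "neurons"
        integ, y = 0.0, 0.0                  # integrator state and plant output
        e_prev = setpoint - y                # avoids a derivative kick on the first step
        for _ in range(steps):
            e = setpoint - y
            integ += e * dt
            deriv = (e - e_prev) / dt
            pid = np.array([e, integ, deriv])      # outputs of the three hidden neurons
            u = float(w @ pid)                     # control signal = weighted sum
            y += dt * (-y + u)                     # toy first-order plant, Euler step
            w += lr * e * pid * dt                 # backpropagation-style weight update
            e_prev = e
        return y, w

    final_y, final_w = run_pidnn()
    print("final plant output:", round(final_y, 3), "weights:", np.round(final_w, 3))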

  16. Regularized negative correlation learning for neural network ensembles.

    Science.gov (United States)

    Chen, Huanhuan; Yao, Xin

    2009-12-01

    Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term to the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when lambda = 1) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains the reason why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter lambda in NCL by cross validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, and each sub-objective is implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and provide an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. The experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level is nontrivial in the data set.
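
    The per-network NCL cost referred to above is usually written as the network's squared error plus lambda times a correlation penalty built from the deviations of each member from the ensemble mean. The sketch below computes that cost in the commonly cited form; the exact normalisation (factors of 1/2, averaging over samples) may differ from the paper's formulation.

    import numpy as np

    def ncl_costs(preds, y, lam):
        """Per-network NCL cost: squared error plus lambda times the correlation penalty.

        preds : array of shape (M, N) -- predictions of the M ensemble members on N samples
        y     : array of shape (N,)   -- targets
        Penalty for member i: p_i = sum_n (f_i(n) - fbar(n)) * sum_{j != i} (f_j(n) - fbar(n)),
        which encourages members to be negatively correlated around the ensemble mean fbar.
        """
        fbar = preds.mean(axis=0)                       # ensemble (simple average) output
        dev = preds - fbar                              # deviation of each member from fbar
        penalty = np.sum(dev * (dev.sum(axis=0) - dev), axis=1)   # sum over j != i
        sq_err = np.sum((preds - y) ** 2, axis=1)
        return sq_err + lam * penalty

    # Illustrative check with 3 members and 5 samples:
    rng = np.random.default_rng(0)
    y = rng.normal(size=5)
    preds = y + rng.normal(scale=0.3, size=(3, 5))
    print(ncl_costs(preds, y, lam=1.0))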

  17. Aspects of the numerical analysis of neural networks

    Science.gov (United States)

    Ellacott, S. W.

    This article starts with a brief introduction to neural networks for those unfamiliar with the basic concepts, together with a very brief overview of mathematical approaches to the subject. This is followed by a more detailed look at three areas of research which are of particular interest to numerical analysts. The first area is approximation theory. If K is a compact set in R^n, for some n, then it is proved that a semilinear feedforward network with one hidden layer can uniformly approximate any continuous function in C(K) to any required accuracy. A discussion of known results and open questions on the degree of approximation is included. We also consider the relevance of radial basis functions to neural networks. The second area considered is that of learning algorithms. A detailed analysis of one popular algorithm (the delta rule) will be given, indicating why one implementation leads to a stable numerical process, whereas an initially attractive variant (essentially a form of steepest descent) does not. Similar considerations apply to the backpropagation algorithm. The effect of filtering and other preprocessing of the input data will also be discussed systematically. Finally some applications of neural networks to numerical computation are considered.

  18. Characterization of Early Cortical Neural Network ...

    Science.gov (United States)

    We examined the development of neural network activity using microelectrode array (MEA) recordings made in multi-well MEA plates (mwMEAs) over the first 12 days in vitro (DIV). In primary cortical cultures made from postnatal rats, action potential spiking activity was essentially absent on DIV 2 and developed rapidly between DIV 5 and 12. Spiking activity was primarily sporadic and unorganized at early DIV, and became progressively more organized with time in culture, with bursting parameters, synchrony and network bursting increasing between DIV 5 and 12. We selected 12 features to describe network activity and principal components analysis using these features demonstrated a general segregation of data by age at both the well and plate levels. Using a combination of random forest classifiers and Support Vector Machines, we demonstrated that 4 features (CV of within burst ISI, CV of IBI, network spike rate and burst rate) were sufficient to predict the age (either DIV 5, 7, 9 or 12) of each well recording with >65% accuracy. When restricting the classification problem to a binary decision, we found that classification improved dramatically, e.g. 95% accuracy for discriminating DIV 5 vs DIV 12 wells. Further, we present a novel resampling approach to determine the number of wells that might be needed for conducting comparisons of different treatments using mwMEA plates. Overall, these results demonstrate that network development on mwMEA plates is similar to

  19. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
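
    The ODE view of forward propagation treats each residual layer as one forward-Euler step of Y' = sigma(K(t) Y + b(t)). The sketch below shows that discretised propagation; the layer widths, step size and the antisymmetric weight choice are illustrative assumptions in the spirit of this line of work, not the specific architectures proposed in the paper.

    import numpy as np

    def ode_forward(Y0, Ks, bs, h=0.1):
        """Forward propagation as forward-Euler steps of the ODE  Y' = tanh(K Y + b).

        Y0 : initial features, shape (n_features, n_examples)
        Ks : list of layer matrices K_j, each (n_features, n_features)
        bs : list of bias vectors b_j, each (n_features, 1)
        h  : step size; a small h keeps the discrete propagation close to the underlying ODE.
        """
        Y = Y0.copy()
        for K, b in zip(Ks, bs):
            Y = Y + h * np.tanh(K @ Y + b)    # residual (ResNet-like) update = Euler step
        return Y

    rng = np.random.default_rng(0)
    n_feat, n_layers = 4, 20
    Ks, bs = [], []
    for _ in range(n_layers):
        A = rng.normal(size=(n_feat, n_feat))
        Ks.append(A - A.T)                    # antisymmetric K, one stabilising choice (illustrative)
        bs.append(np.zeros((n_feat, 1)))
    Y_out = ode_forward(rng.normal(size=(n_feat, 8)), Ks, bs)
    print(Y_out.shape)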

  20. Phase diagram of spiking neural networks.

    Science.gov (United States)

    Seyed-Allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, that 20% of neurons are inhibitory and that 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective: inspired by evolution, I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activities, the total duration of activities, the maximum rate of population activity and the occurrence time of the maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters - the excitatory to inhibitory ratio, the sparseness of connections and the synaptic weights. The phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independent of external stimuli.
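
    The "common values" referred to above (2% connection probability, 20% inhibitory neurons) translate directly into a random connectivity matrix. A minimal sketch of generating such a matrix is shown below; the synaptic weight magnitudes are placeholders, and this is not the simulation code used in the study.

    import numpy as np

    def build_random_network(n=1000, p_connect=0.02, frac_inhibitory=0.2,
                             w_exc=0.5, w_inh=-1.0, seed=0):
        """Random connectivity with the 'common values': 2% connections, 20% inhibitory neurons.

        Returns an (n, n) weight matrix W, where W[i, j] is the synapse from neuron j to neuron i.
        w_exc and w_inh are placeholder synaptic weights.
        """
        rng = np.random.default_rng(seed)
        inhibitory = rng.random(n) < frac_inhibitory           # ~20% inhibitory neurons
        connected = rng.random((n, n)) < p_connect             # ~2% connection probability
        np.fill_diagonal(connected, False)                     # no self-connections
        weights = np.where(inhibitory, w_inh, w_exc)           # sign set by presynaptic neuron type
        return connected * weights[np.newaxis, :], inhibitory

    W, inhibitory = build_random_network()
    print("connection density:", W.astype(bool).mean(), "inhibitory fraction:", inhibitory.mean())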

  1. An efficient neural network approach to dynamic robot motion planning.

    Science.gov (United States)

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.

  2. Programmable synaptic chip for electronic neural networks

    Science.gov (United States)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  3. Dynamics of macro- and microscopic neural networks

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare

    2014-01-01

    GN), which is a class of signals with a non-trivial low-frequency component. It is assumed that certain characteristics of the low-frequency component can yield information about the neural processes behind the signal. The method has been used in a range of different studies over the course of the past 10...... that the method continues to find use, of which examples are presented. In the second part of the thesis, numerical simulations of networks of neurons are described. To simplify the analysis, a relatively simple neuron model - Leaky Integrate and Fire - is chosen. The strengths of the connections between...... shown that the synchronizing effect of the plasticity disappears when the strengths of the connections are frozen in time. Subsequently, the so-called ``Sisyphus'' mechanism is discussed, which is shown to cause slow fluctuations in both the network synchronization and the strengths...

  4. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  5. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, grid-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of the convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the present paper describes the proposed block structure of a CNN for classifying crucial features from 3D sensor data.
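
    The convolution operation mentioned above slides a small kernel over the input and computes a weighted sum at each position. A naive sketch of a 2-D "valid" convolution (strictly, cross-correlation, as commonly implemented in CNNs) is given below; the toy image and kernel are purely illustrative.

    import numpy as np

    def conv2d_valid(image, kernel):
        """Naive 2-D cross-correlation (the 'convolution' used in CNNs), 'valid' padding, stride 1."""
        ih, iw = image.shape
        kh, kw = kernel.shape
        oh, ow = ih - kh + 1, iw - kw + 1
        out = np.zeros((oh, ow))
        for r in range(oh):
            for c in range(ow):
                out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
        return out

    # Illustrative use: a 3x3 vertical-edge kernel applied to a toy 6x6 image.
    image = np.arange(36, dtype=float).reshape(6, 6)
    kernel = np.array([[1., 0., -1.],
                       [1., 0., -1.],
                       [1., 0., -1.]])
    print(conv2d_valid(image, kernel))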

  6. Practical Application of Neural Networks in State Space Control

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon

    theoretic notions followed by a detailed description of the topology, neuron functions and learning rules of the two types of neural networks treated in the thesis, the multilayer perceptron and the neurofuzzy networks. In both cases, a Least Squares second-order gradient method is used to train....... Then the controller is shown to work on a simulation example. We also address the potential problem of too rapidly fluctuating parameters by including regularization in the learning rule. Next we develop a direct adaptive certainty-equivalence controller based on neurofuzzy models. The control loop is proven...

  7. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Adaptive model predictive process control using neural networks

    Science.gov (United States)

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  9. Evolving Spiking Neural Networks for Control of Artificial Creatures

    Directory of Open Access Journals (Sweden)

    Arash Ahmadi

    2013-10-01

    Full Text Available To understand and analyze the behavior of complicated and intelligent organisms, scientists apply bio-inspired concepts, including evolution and learning, to mathematical models and analyses. Researchers utilize these concepts in different applications, searching for improved methods and approaches for modern computational systems. This paper presents a genetic algorithm based evolution framework in which the Spiking Neural Network (SNN) of artificial creatures is evolved for a higher chance of survival in a virtual environment. The artificial creatures are composed of randomly connected Izhikevich spiking reservoir neural networks using population activity rate coding. Inspired by biological neurons, the neuronal connections are considered with different axonal conduction delays. Simulation results prove that the evolutionary algorithm has the capability to find or synthesize artificial creatures which can survive in the environment successfully.

  10. 78 FR 12359 - Goodman Networks, Inc., Core Network Engineering (Deployment Engineering) Division Including...

    Science.gov (United States)

    2013-02-22

    ... Employment and Training Administration Goodman Networks, Inc., Core Network Engineering (Deployment Engineering) Division Including Workers in the Core Network Engineering (Deployment Engineering) Division in... of Goodman Networks, Inc., Core Network Engineering (Deployment Engineering) Division, including...

  11. Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 1

    Science.gov (United States)

    Lea, Robert N. (Editor); Villarreal, James (Editor)

    1991-01-01

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Houston, Clear Lake. The workshop was held April 11 to 13 at the Johnson Space Center. Technical topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.

  12. Artificial neural networks in pancreatic disease.

    Science.gov (United States)

    Bartosch-Härlid, A; Andersson, B; Aho, U; Nilsson, J; Andersson, R

    2008-07-01

    An artificial neural network (ANN) is a non-linear pattern recognition technique that is rapidly gaining in popularity in medical decision-making. This study investigated the use of ANNs for diagnostic and prognostic purposes in pancreatic disease, especially acute pancreatitis and pancreatic cancer. PubMed was searched for articles on the use of ANNs in pancreatic diseases using the MeSH terms 'neural networks (computer)', 'pancreatic neoplasms', 'pancreatitis' and 'pancreatic diseases'. A systematic review of the articles was performed. Eleven articles were identified, published between 1993 and 2007. The situations that lend themselves best to analysis by ANNs are complex multifactorial relationships, medical decisions where a second opinion is needed, and situations where automated interpretation is required, for example when the number of available experts is inadequate. Conventional linear models have limitations in terms of diagnosis and prediction of outcome in acute pancreatitis and pancreatic cancer. Management of these disorders can be improved by applying ANNs to existing clinical parameters and newly established gene expression profiles. (c) 2008 British Journal of Surgery Society Ltd. Published by John Wiley & Sons, Ltd.

  13. BOUNDARY DEPTH INFORMATION USING HOPFIELD NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    S. Xu

    2016-06-01

    Full Text Available Depth information is widely used for the representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One uses distance cues from a depth camera, but the results depend heavily on the device, and the accuracy degrades greatly as the distance to the object increases. The other uses binocular cues from stereo matching to obtain the depth information, and collecting the depth information of different scenes by stereo matching methods has become increasingly mature and convenient. In the objective function, the data term ensures that the difference between matched pixels is small, and the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses a Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. Finally, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and the results show that our boundary depth information is more accurate than that of other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.

  14. Maximum Entropy Approaches to Living Neural Networks

    Directory of Open Access Journals (Sweden)

    John M. Beggs

    2010-01-01

    Full Text Available Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
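
    For reference, the pairwise maximum entropy model discussed in this review takes the familiar Ising-like form below, with the fields h_i fit to reproduce the measured firing rates and the couplings J_ij fit to reproduce the pairwise correlations (standard notation, not necessarily that of any specific paper reviewed):

    P(\sigma_1, \dots, \sigma_N) = \frac{1}{Z} \exp\Big( \sum_i h_i \sigma_i + \tfrac{1}{2} \sum_{i \neq j} J_{ij} \, \sigma_i \sigma_j \Big), \qquad \sigma_i \in \{0, 1\},

    where Z is the partition function that normalizes the distribution. The temporal and higher-order extensions mentioned above add further constraint terms of the same exponential-family form.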

  15. Neural network analysis for hazardous waste characterization

    Energy Technology Data Exchange (ETDEWEB)

    Misra, M.; Pratt, L.Y.; Farris, C. [Colorado School of Mines, Golden, CO (United States)] [and others

    1995-12-31

    This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig face characterization experimental cell at INEL to determine the depth and nature of buried objects. This project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of this approach. The resulting system was able to determine the nature of buried objects correctly 87% of the time and was able to locate a buried object to within an average error of 0.8 feet. These statistics were gathered based on a large test set and so can be considered reliable. Considering the limited nature of this study, these results strongly indicate the feasibility of this approach, and the importance of appropriate preprocessing of neural network input data.

  16. Object Classification Using Substance Based Neural Network

    Directory of Open Access Journals (Sweden)

    P. Sengottuvelan

    2014-01-01

    Full Text Available Object recognition has seen tremendous growth in the field of image analysis. The required set of image objects is identified and retrieved on the basis of object recognition. In this paper, we propose a novel classification technique called substance based image classification (SIC) using a wavelet neural network. The foremost task of SIC is to remove the surrounding regions from an image to reduce the misclassified portion and to effectively reflect the shape of an object. At first, the image is processed by the SIC system through segmentation. Next, in order to attain more accurate information, the wavelet transform is applied to the extracted set of regions to extract the configured set of features. Finally, using the neural network classifier model together with LSEG segmentation, misclassifications over the given natural images are removed and background regions are separated from the given natural image. Moreover, to increase the accuracy of object classification, the SIC system involves the removal of the surrounding regions of the image. Performance evaluation reveals that the proposed SIC system reduces the occurrence of misclassification and reflects the exact shape of an object, with improvements of approximately 10–15%.

  17. Energy coding in neural network with inhibitory neurons.

    Science.gov (United States)

    Wang, Ziyin; Wang, Rubin; Fang, Ruiyan

    2015-04-01

    This paper aimed at assessing and comparing the effects of inhibitory neurons on the neural energy distribution of a neural network, relative to the network activities in the absence of inhibitory neurons, in order to understand the nature of neural energy distribution and neural energy coding. Under stimulation, synchronous oscillation differs significantly between neural networks with and without inhibitory neurons, and this difference can be quantitatively evaluated by the characteristic energy distribution. In addition, the difference in synchronous oscillation of the neural activity can be quantitatively described by the change of the energy distribution as the network parameters are gradually adjusted. Compared with the traditional method of correlation coefficient analysis, quantitative indicators based on the neural energy distribution characteristics are more effective in reflecting the dynamic features of the neural network activities. Meanwhile, this neural coding method, taking a global perspective on neural activity, effectively avoids the current defects of neural encoding and decoding theory and the enormous difficulties encountered. Our studies have shown that neural energy coding is a new coding theory with high efficiency and great potential.

  18. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason why there is so much interest in these technologies is that classical solution methods either do not exist or are not suitable in terms of robustness. They are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is its relative simplicity and the possibility of self-learning based on a set of pattern situations. For the learning phase the most commonly used algorithm is back-propagation of error (BPE). BPE is based on minimizing an error function representing the sum of squared errors at the outputs of the neural net, over all patterns of the learning set. However, when performing BPE for the first time, one finds that the handling of the learning factor must be completed by a suitable method. The stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the value of the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning factor handling method and to present the results.
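
    The article's central concern is managing the learning factor during BPE training. The snippet below illustrates the general idea with a common "bold driver" style heuristic (grow the factor while the error keeps falling, shrink it after an increase); this is only an illustration of learning-factor management, not the two functions derived in the article.

    def adapt_learning_factor(alpha, err_new, err_old, grow=1.05, shrink=0.5):
        """Common heuristic for managing the learning factor during BPE training.

        If the error function decreased, cautiously increase the factor to speed convergence;
        if it increased, cut the factor sharply to restore stability.
        (Illustrative only -- not the management functions derived in the article.)
        """
        return alpha * grow if err_new < err_old else alpha * shrink

    # Illustrative trace over a fake error sequence:
    alpha, err_old = 0.1, 1.0
    for err_new in [0.8, 0.6, 0.7, 0.5, 0.4]:
        alpha = adapt_learning_factor(alpha, err_new, err_old)
        err_old = err_new
        print(round(alpha, 4))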

  19. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: morvymm@yahoo.com.m [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)

    2011-02-15

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence technology have been used. Approaches based on Genetic Algorithms and Artificial Neural Networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, they still have some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have also been utilized. In this work, several ANN topologies were trained and tested using ANNs and Genetically Evolved Artificial Neural Networks with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  20. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  1. Artificial neural networks with an infinite number of nodes

    Science.gov (United States)

    Blekas, K.; Lagaris, I. E.

    2017-10-01

    A new class of Artificial Neural Networks is described, incorporating a node density function and functional weights. This network, containing an infinite number of nodes, excels in generalization and possesses a superior extrapolation capability.

  2. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as the correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Those depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  3. Adaptive training of feedforward neural networks by Kalman filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering; Tuerkcan, E. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands)

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by neural networks in real-time environments, where the trained network is used for system estimation while it is further trained by means of the information provided by the ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.).
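
    In the Kalman-filter view of training, the network weights play the role of the filter state, and the standard extended-Kalman-filter update is applied after each new measurement. A minimal sketch of one such update for a scalar network output is given below; the measurement Jacobian H = dy/dw is assumed to be supplied (e.g. by backpropagation), and the noise levels are illustrative placeholders.

    import numpy as np

    def ekf_weight_update(w, P, H, y_pred, d, R=1e-2, Q=1e-6):
        """One EKF update of the network weights, treating the weights as the filter state.

        w      : current weight vector, shape (n,)
        P      : weight error covariance, shape (n, n)
        H      : Jacobian dy/dw of the scalar network output at w, shape (n,)
        y_pred : network output for the current input
        d      : desired (measured) output
        R, Q   : assumed measurement and process noise levels (placeholders)
        """
        S = float(H @ P @ H) + R              # innovation variance (scalar output case)
        K = (P @ H) / S                       # Kalman gain, shape (n,)
        w_new = w + K * (d - y_pred)          # correct the weights with the innovation
        P_new = P - np.outer(K, H) @ P + Q * np.eye(len(w))
        return w_new, P_new

    # Illustrative call with a 3-weight "network":
    w, P = np.zeros(3), np.eye(3)
    H = np.array([0.2, -0.5, 1.0])            # stand-in for dy/dw from backpropagation
    w, P = ekf_weight_update(w, P, H, y_pred=0.0, d=1.0)
    print(w)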

  4. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  5. Quantum Entanglement in Neural Network States

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-04-01

    Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, and at the same time strongly deviates from the Page entropy of the completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the
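
    For readers unfamiliar with the RBM parametrization, the brute-force sketch below evaluates the standard RBM amplitude and the entanglement entropy of a half-chain bipartition for a toy six-spin system. The random couplings, the system size, and the dense (hence effectively "long-range") weight matrix are illustrative assumptions; the calculation is not taken from the paper.

```python
import numpy as np
from itertools import product

# Hedged sketch: amplitudes of a random RBM state, psi(sigma) proportional to
# exp(a.sigma) * prod_j 2*cosh(b_j + W_j.sigma) with sigma_i = +-1, and the
# entanglement entropy of a 3|3 bipartition computed by brute force.

rng = np.random.default_rng(1)
N, M = 6, 6                              # visible spins, hidden units
a = 0.1 * rng.standard_normal(N)         # visible biases
b = 0.1 * rng.standard_normal(M)         # hidden biases
W = 0.3 * rng.standard_normal((M, N))    # couplings (dense, so effectively long range)

def amplitude(sigma):
    theta = b + W @ sigma
    return np.exp(a @ sigma) * np.prod(2.0 * np.cosh(theta))

# Full state vector over all 2^N spin configurations (feasible only for tiny N).
configs = np.array(list(product([-1.0, 1.0], repeat=N)))
psi = np.array([amplitude(s) for s in configs])
psi /= np.linalg.norm(psi)

# Schmidt decomposition across the middle of the chain: reshape to (2^3, 2^3)
# and use the singular values to form the reduced density-matrix spectrum.
schmidt = np.linalg.svd(psi.reshape(2 ** 3, 2 ** 3), compute_uv=False)
p = schmidt ** 2
entropy = -np.sum(p * np.log(p + 1e-300))
print("half-chain entanglement entropy:", entropy)
```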

  6. Quantum Entanglement in Neural Network States

    Directory of Open Access Journals (Sweden)

    Dong-Ling Deng

    2017-05-01

    Full Text Available Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, and at the same time strongly deviates from the Page entropy of the completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our

  7. Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks

    Science.gov (United States)

    Jorgensen, Charles C.

    1997-01-01

    A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.

  8. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    Science.gov (United States)

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for the specified sub-tasks. A Bayesian framework is used to determine the optimal neural network structure in each case. These networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path following a near-optimum trajectory.

  9. Rod-Shaped Neural Units for Aligned 3D Neural Network Connection.

    Science.gov (United States)

    Kato-Negishi, Midori; Onoe, Hiroaki; Ito, Akane; Takeuchi, Shoji

    2017-08-01

    This paper proposes neural tissue units with aligned nerve fibers (called rod-shaped neural units) that connect neural networks with aligned neurons. To make the proposed units, 3D fiber-shaped neural tissues covered with a calcium alginate hydrogel layer are prepared with a microfluidic system and are cut in an accurate and reproducible manner. These units have aligned nerve fibers inside the hydrogel layer and connectable points on both ends. By connecting the units with a poly(dimethylsiloxane) guide, 3D neural tissues can be constructed and maintained for more than two weeks of culture. In addition, neural networks can be formed between the different neural units via synaptic connections. Experimental results indicate that the proposed rod-shaped neural units are effective tools for the construction of spatially complex connections with aligned nerve fibers in vitro. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Multiple image sensor data fusion through artificial neural networks

    Science.gov (United States)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  11. Behaviour in O of the Neural Networks Training Cost

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1998-01-01

    We study the behaviour in zero of the derivatives of the cost function used when training non-linear neural networks. It is shown that a fair number of first, second and higher order derivatives vanish in zero, validating the belief that 0 is a peculiar and potentially harmful location. These calculations are related to practical and theoretical aspects of neural networks training.

  12. Neural network model to control an experimental chaotic pendulum

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Takens, F; vandenBleek, CM

    1996-01-01

    A feedforward neural network was trained to predict the motion of an experimental, driven, and damped pendulum operating in a chaotic regime. The network learned the behavior of the pendulum from a time series of the pendulum's angle, the single measured variable. The validity of the neural

  13. Classification of Urinary Calculi using Feed-Forward Neural Networks

    African Journals Online (AJOL)

    In this work the results of classification of these types of calculi (using their infrared spectra in the region 1450–450 cm–1) by feed-forward neural networks are presented. Genetic algorithms were used for optimization of neural networks and for selection of the spectral regions most suitable for classification purposes.

  14. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing computational cost in simulation, the surgery procedure can also serve as a quick...
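
    A minimal sketch of the optimal-brain-surgeon rule referred to above, applied to a linear least-squares model where the Hessian of the cost is exact; the data, the number of pruning steps, and the reuse of a fixed inverse Hessian between steps are simplifying assumptions rather than the paper's procedure.

```python
import numpy as np

# Hedged sketch: optimal brain surgeon on a linear model with cost
# 0.5*||Xw - y||^2, whose Hessian is exactly H = X^T X.  The saliency of weight
# q is L_q = w_q^2 / (2 [H^-1]_qq); the weight with least saliency is removed
# and the remaining weights are adjusted to compensate.

rng = np.random.default_rng(2)
n, d = 200, 8
X = rng.standard_normal((n, d))
true_w = np.array([1.5, -2.0, 0.0, 0.0, 0.7, 0.0, 0.0, 3.0])
y = X @ true_w + 0.01 * rng.standard_normal(n)

w = np.linalg.lstsq(X, y, rcond=None)[0]      # "trained" weights
H_inv = np.linalg.inv(X.T @ X)                # inverse Hessian

pruned = []
for _ in range(4):
    active = [i for i in range(d) if i not in pruned]
    saliency = w[active] ** 2 / (2.0 * np.diag(H_inv)[active])
    q = active[int(np.argmin(saliency))]
    # Surgery step: delete w_q and update the other weights.
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    pruned.append(q)
    w[pruned] = 0.0   # keep every removed weight at zero (a simplification of
                      # the fully constrained multi-weight surgery)
    cost = 0.5 * np.sum((X @ w - y) ** 2)
    print(f"pruned weight {q}, remaining cost {cost:.3f}")
```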

  15. Neural networks as a tool for unit commitment

    DEFF Research Database (Denmark)

    Rønne-Hansen, Peter; Rønne-Hansen, Jan

    1991-01-01

    Some of the fundamental problems when solving the power system unit commitment problem by means of neural networks have been attacked. It has been demonstrated for a small example that neural networks might be a viable alternative. Some of the major problems solved in this initiating phase form...

  16. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  17. Classes of feedforward neural networks and their circuit complexity

    NARCIS (Netherlands)

    Shawe-Taylor, John S.; Anthony, Martin H.G.; Kern, Walter

    1992-01-01

    This paper aims to place neural networks in the context of boolean circuit complexity. We define appropriate classes of feedforward neural networks with specified fan-in, accuracy of computation and depth and using techniques of communication complexity proceed to show that the classes fit into a

  18. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  19. Boosted jet identification using particle candidates and deep neural networks

    CERN Document Server

    CMS Collaboration

    2017-01-01

    This note presents developments for the identification of hadronically decaying top quarks using deep neural networks in CMS. A new method that utilizes one-dimensional convolutional neural networks based on jet constituent particles is proposed. Alternative methods using boosted decision trees based on jet observables are compared. The new method shows significant improvement in performance.
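
    The note itself gives no implementation details, so the following is only a hedged sketch of what a one-dimensional convolutional tagger over jet constituents might look like; the constituent features (pt, eta, phi, charge), the layer sizes, and the random stand-in data are assumptions, not the CMS configuration.

```python
import numpy as np
import tensorflow as tf

# Hedged sketch: a 1D convolutional classifier acting on a fixed-length list of
# jet constituent particles, producing a top-quark vs. QCD score.

n_constituents, n_features = 100, 4   # assumed padding length and per-particle features

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=1, activation="relu",
                           input_shape=(n_constituents, n_features)),  # per-particle embedding
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),      # local correlations
    tf.keras.layers.GlobalAveragePooling1D(),                          # summary over constituents
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),                    # signal probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random toy data standing in for jets; real training would use simulated events.
x = np.random.rand(256, n_constituents, n_features).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(x, y, epochs=1, batch_size=64, verbose=0)
print(model.predict(x[:2], verbose=0))
```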

  20. Mapping Neural Network Derived from the Parzen Window Estimator

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Hartmann, U.

    1992-01-01

    The article presents a general theoretical basis for the construction of mapping neural networks. The theory is based on the Parzen Window estimator for...
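
    Assuming the mapping network reduces to the familiar kernel-regression form of the Parzen window estimator (an assumption made here for illustration, since the abstract is truncated), a compact sketch is:

```python
import numpy as np

# Hedged sketch: a "mapping" estimate y(x) built from stored training pairs
# with a Gaussian Parzen window of width h (kernel regression).

def parzen_map(x_query, x_train, y_train, h=0.2):
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * h ** 2))          # Gaussian window around each stored sample
    return (w @ y_train) / np.sum(w, axis=1)  # window-weighted average of stored outputs

rng = np.random.default_rng(3)
x_train = rng.uniform(-3.0, 3.0, 200)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(200)

x_query = np.linspace(-3.0, 3.0, 7)
print(np.round(parzen_map(x_query, x_train, y_train), 3))
print(np.round(np.sin(x_query), 3))           # reference values
```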

  1. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1999-01-01

    of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis...

  2. Neural Network for Optimization of Existing Control Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1995-01-01

    The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems.

  3. Using Neural Networks to Predict MBA Student Success

    Science.gov (United States)

    Naik, Bijayananda; Ragothaman, Srinivasan

    2004-01-01

    Predicting MBA student performance for admission decisions is crucial for educational institutions. This paper evaluates the ability of three different models--neural networks, logit, and probit--to predict MBA student performance in graduate programs. The neural network technique was used to classify applicants into successful and marginal student…

  4. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    MICHAEL

    modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data obtained from an inverse fluidized bed reactor treating the starch industry wastewater.

  5. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    user

    Consequently, many structures based on artificial neural networks (ANN) have been developed in the literature. The most significant ... Keywords: Artificial Neural Networks (ANN), p-q theory, SAPF, Harmonics, Total Harmonic Distortion.

  6. Application of Neural Networks to House Pricing and Bond Rating

    NARCIS (Netherlands)

    Daniëls, H.A.M.; Kamp, B.; Verkooijen, W.J.H.

    1997-01-01

    Feed forward neural networks receive growing attention as a data modelling tool in economic classification problems. It is well-known that controlling the design of a neural network can be cumbersome. Inaccuracies may lead to a manifold of problems in the application, such as higher errors due to

  7. Neural Networks for Language Identification: A Comparative Study.

    Science.gov (United States)

    MacNamara, Shane; Cunningham, Padraig; Byrne, John

    1998-01-01

    Analyzes a neural network for its ability to perform a task involving identification of the language entries in a 19th-century library catalog containing entries in 14 different languages. Compares the neural network's performance with that of trigrams and a suffix/morphology analysis; the trigrams prove to be superior. (AEF)

  8. Multilayer perceptron neural network for downscaling rainfall in arid ...

    Indian Academy of Sciences (India)

    Multilayer perceptron neural network for downscaling rainfall in arid region: A case study of Baluchistan, Pakistan ... A multilayer perceptron (MLP) neural network has been proposed in the present study for the downscaling of rainfall in the data scarce arid region of Baluchistan province of Pakistan, which is considered as ...

  9. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural network generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional reflectance spectrum based neural network to help us understand the problem from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  10. Artificial neural networks in predicting current in electric arc furnaces

    Science.gov (United States)

    Panoiu, M.; Panoiu, C.; Iordan, A.; Ghiormez, L.

    2014-03-01

    The paper presents a study of the possibility of using artificial neural networks for the prediction of the current and the voltage of Electric Arc Furnaces. Multi-layer perceptron and radial basis function artificial neural networks implemented in Matlab were used. The study is based on measured data items from an Electric Arc Furnace in an industrial plant in Romania.

  11. Parameter estimation of an aeroelastic aircraft using neural networks

    Indian Academy of Sciences (India)

    Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of ... of the network in terms of the number of neurons in the hidden layer, the learning rate, the momentum rate, etc. is not an exact ...

  12. Artificial neural networks for prediction of percentage of water ...

    Indian Academy of Sciences (India)

    Based on these input parameters, the neural network model predicted the percentage of water absorption of each specimen. The training and testing results of the neural network model have shown a strong potential for predicting the percentage of water absorption of the geopolymer specimens.

  13. Comparative performance of some popular artificial neural network ...

    Indian Academy of Sciences (India)

    Comparative performance of some popular artificial neural network algorithms on benchmark and function approximation problems ... dynamic range of these functions, it is suggested that these functions can also be considered as standard benchmark problems for function approximation using artificial neural networks.

  14. Spacecraft Neural Network Control System Design using FPGA

    OpenAIRE

    Hanaa T. El-Madany; Faten H. Fahmy; Ninet M. A. El-Rahman; Hassen T. Dorrah

    2011-01-01

    Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products of space technologies. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. Field programmable gate array (FPGA) is a digital device that owns reprogrammable properties and robust flexibility. For the neural network based instrument prototype in real time application, conventional specific VLSI neural chip design suffer...

  15. IDENTIFICATION AND CONTROL OF AN ASYNCHRONOUS MACHINE USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A ZERGAOUI

    2000-06-01

    Full Text Available In this work, we present the application of artificial neural networks to the identification and control of the asynchronous motor, which is a complex nonlinear system with variable internal dynamics.  We show that neural networks can be applied to control the stator currents of the induction motor.  The results of the different simulations are presented to evaluate the performance of the neural controller proposed.

  16. Estimating Type Ia Supernova Metallicities Using Neural Networks

    Science.gov (United States)

    Villar, V. Ashley

    2017-01-01

    Normal Type Ia supernovae (SNe) can be used as standardizable candles because their progenitors, white dwarfs, are a fairly homogeneous class of objects. However, intrinsic variability in these events arises from a number of factors, including metallicity. Recent studies have investigated the effects of metallicity on Type Ia SNe observables from both a theoretical approach, by tuning model metallicity to analyze spectral features, and an observational approach, by studying the effect of host metallicity on light curves. In this work, we take a new, data-driven approach to the problem. Inspired by the success of neural networks in the field of image processing, we aim to estimate the metallicities of Type Ia SNe progenitors from their near-maximum spectra using feed-forward neural networks. We first collect a sample of near-maximum Type Ia SNe spectra from the literature to be smoothed and down-sampled. We then estimate the metallicities of the SNe hosts using the B-band magnitudes. We build a multilayer perceptron to generate a model that takes as input the down-sampled spectra and returns a scalar metallicity. Finally, we discuss basic considerations to be taken when working with spectral (as opposed to image) data using neural networks.

  17. Implementation of neural network for color properties of polycarbonates

    Science.gov (United States)

    Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.

    2014-05-01

    In the present paper, the applicability of artificial neural networks (ANN) is investigated for the color properties of plastics. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18 and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM) in the feed forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives and pigments, while three tristimulus color values L*, a* and b* were used as the output layer. Statistical analysis in terms of root-mean-squared (RMS) error, absolute fraction of variance (R squared), as well as mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons in the hidden layer of the feed forward back-propagation ANN model has shown the best result in the present study. The degree of accuracy of the ANN model in reducing errors proved acceptable in all statistical analyses, as shown in the results. It was concluded that the ANN provides a feasible method for error reduction in specific color tristimulus values.
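
    A toy sketch of the network shape described above (22 formulation inputs, one hidden layer of 14 neurons, three tristimulus outputs). The data are random placeholders, and plain gradient descent stands in for the Levenberg-Marquardt training used in the study.

```python
import numpy as np

# Hedged sketch: 22-14-3 feedforward network trained by back-propagation on
# synthetic stand-in data (random "recipes" and tristimulus-like targets).

rng = np.random.default_rng(4)
n_in, n_hidden, n_out = 22, 14, 3
X = rng.random((300, n_in))                  # stand-in formulations
Y = X @ rng.random((n_in, n_out))
Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)     # standardized targets

W1 = 0.1 * rng.standard_normal((n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_out)); b2 = np.zeros(n_out)
lr = 0.05

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    Y_hat = H @ W2 + b2                      # linear output layer (L*, a*, b*)
    err = Y_hat - Y
    # Back-propagation of the mean-squared error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("RMS error on the toy data:", round(float(np.sqrt(np.mean(err ** 2))), 4))
```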

  18. Optical Neural Network Models Applied To Logic Program Execution

    Science.gov (United States)

    Stormon, Charles D.

    1988-05-01

    Logic programming is being used extensively by Artificial Intelligence researchers to solve problems including natural language processing and expert systems. These languages, of which Prolog is the most widely used, promise to revolutionize software engineering, but much greater performance is needed. Researchers have demonstrated the applicability of neural network models to the solution of certain NP-complete problems, but these methods are not obviously applicable to the execution of logic programs. This paper outlines the use of neural networks in four aspects of the logic program execution cycle, and discusses results of a simulation of three of these. Four neural network functional units are described, called the substitution agent, the clause filter, the structure processor, and the heuristics generator, respectively. Simulation results suggest that the system described may provide several orders of magnitude improvement in execution speed for large logic programs. However, practical implementation of the proposed architecture will require the application of optical computing techniques due to the large number of neurons required, and the need for massive, adaptive connectivity.

  19. Artificial neural networks for stiffness estimation in magnetic resonance elastography.

    Science.gov (United States)

    Murphy, Matthew C; Manduca, Armando; Trzasko, Joshua D; Glaser, Kevin J; Huston, John; Ehman, Richard L

    2017-11-28

    To investigate the feasibility of using artificial neural networks to estimate stiffness from MR elastography (MRE) data. Artificial neural networks were fit using model-based training patterns to estimate stiffness from images of displacement using a patch size of ∼1 cm in each dimension. These neural network inversions (NNIs) were then evaluated in a set of simulation experiments designed to investigate the effects of wave interference and noise on NNI accuracy. NNI was also tested in vivo, comparing NNI results against currently used methods. In 4 simulation experiments, NNI performed as well or better than direct inversion (DI) for predicting the known stiffness of the data. Summary NNI results were also shown to be significantly correlated with DI results in the liver (R² = 0.974) and in the brain (R² = 0.915), and also correlated with established biological effects including fibrosis stage in the liver and age in the brain. Finally, repeatability error was lower in the brain using NNI compared to DI, and voxel-wise modeling using NNI stiffness maps detected larger effects than using DI maps with similar levels of smoothing. Artificial neural networks represent a new approach to inversion of MRE data. Summary results from NNI and DI are highly correlated and both are capable of detecting biologically relevant signals. Preliminary evidence suggests that NNI stiffness estimates may be more resistant to noise than an algebraic DI approach. Taken together, these results merit future investigation into NNIs to improve the estimation of stiffness in small regions. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
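
    As a hedged, strongly simplified one-dimensional analogue of training an inversion on model-based patterns, the sketch below generates plane shear-wave displacement patches with known wave speed and fits a small regressor to recover a stiffness surrogate. The frequency, patch size, noise level, and regressor architecture are illustrative assumptions, not the published NNI.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch: learn an "inversion" from synthetic displacement patches
# u(x) = cos(kx + phase), with k = 2*pi*f/c, to a stiffness surrogate ~ c^2.

rng = np.random.default_rng(8)
freq = 60.0                        # assumed vibration frequency (Hz)
patch_len, dx = 32, 0.0015         # assumed samples per patch and spacing (m)
xs = np.arange(patch_len) * dx

def make_patch(speed):
    k = 2.0 * np.pi * freq / speed
    phase = rng.uniform(0.0, 2.0 * np.pi)
    return np.cos(k * xs + phase) + 0.05 * rng.standard_normal(patch_len)

speeds = rng.uniform(1.0, 5.0, 4000)           # shear-wave speeds (m/s)
X = np.array([make_patch(c) for c in speeds])
y = speeds ** 2                                 # stiffness surrogate (rho*c^2 up to a constant)

nn_inv = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
nn_inv.fit(X[:3500], y[:3500])
print("R^2 on held-out patches:", nn_inv.score(X[3500:], y[3500:]))
```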

  20. LMI Conditions for Global Stability of Fractional-Order Neural Networks.

    Science.gov (United States)

    Zhang, Shuo; Yu, Yongguang; Yu, Junzhi

    2017-10-01

    Fractional-order neural networks play a vital role in modeling the information processing of neuronal interactions. It is still an open and necessary topic for fractional-order neural networks to investigate their global stability. This paper proposes some simplified linear matrix inequality (LMI) stability conditions for fractional-order linear and nonlinear systems. Then, the global stability analysis of fractional-order neural networks employs the results from the obtained LMI conditions. In the LMI form, the obtained results include the existence and uniqueness of equilibrium point and its global stability, which simplify and extend some previous work on the stability analysis of the fractional-order neural networks. Moreover, a generalized projective synchronization method between such neural systems is given, along with its corresponding LMI condition. Finally, two numerical examples are provided to illustrate the effectiveness of the established LMI conditions.

  1. A Novel Neural Network for Generally Constrained Variational Inequalities.

    Science.gov (United States)

    Gao, Xingbao; Liao, Li-Zhi

    2017-09-01

    This paper presents a novel neural network for solving generally constrained variational inequality problems by constructing a system of double projection equations. By defining proper convex energy functions, the proposed neural network is proved to be stable in the sense of Lyapunov and converges to an exact solution of the original problem for any starting point under the weaker cocoercivity condition or the monotonicity condition of the gradient mapping on the linear equation set. Furthermore, two sufficient conditions are provided to ensure the stability of the proposed neural network for a special case. The proposed model overcomes some shortcomings of existing continuous-time neural networks for constrained variational inequality, and its stability only requires some monotonicity conditions of the underlying mapping and the concavity of nonlinear inequality constraints on the equation set. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.

  2. Neural network for constrained nonsmooth optimization using Tikhonov regularization.

    Science.gov (United States)

    Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun

    2015-03-01

    This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. Firstly, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problem. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality feasible region in finite time, and is globally convergent to the unique optimal solution of the related strongly convex optimization problem. Compared with existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. In the end, some numerical examples and an application are given to illustrate the effectiveness and improvement of the proposed neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Obstacle avoidance for power wheelchair using bayesian neural network.

    Science.gov (United States)

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2007-01-01

    In this paper we present a real-time obstacle avoidance algorithm using a Bayesian neural network for a laser based wheelchair system. The raw laser data is modified to accommodate the wheelchair dimensions, allowing the free space to be determined accurately in real time. Data acquisition is performed to collect the patterns required for training the neural network. A Bayesian framework is applied to determine the optimal neural network structure for the training data. This neural network is trained under the supervision of the Bayesian rule, and the obstacle avoidance task is then implemented for the wheelchair system. Initial results suggest this approach provides an effective solution for autonomous tasks, suggesting Bayesian neural networks may be useful for wider assistive technology applications.

  4. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed in the preliminary learning process of a neural network classifier. In the experiments using the MNIST database of handwritten digits, the feature selection procedure allows reduction of the feature number (from 60 000 to 7000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network is accomplished, which shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  6. Neural Networks for Synthesis and Optimization of Antenna Arrays

    Directory of Open Access Journals (Sweden)

    S. A. Djennas

    2007-04-01

    Full Text Available This paper describes an application of back-propagation neural networks to the synthesis and optimization of antenna arrays. The neural network is able to model and to optimize the antenna arrays by acting on radioelectric or geometric parameters and by taking into account predetermined general criteria. The neural network not only establishes important analytical equations for the optimization step, but also provides great flexibility between the system input and output parameters. This optimization step then becomes possible thanks to the explicit relation given by the neural network. According to different formulations of the synthesis problem, such as acting on the feed law (amplitude and/or phase) and/or the spatial positions of the radiating sources, results on antenna array synthesis and optimization by neural networks are presented and discussed. However, the ANN is able to generate synthesis results very quickly compared to other approaches.

  7. Sea level forecasts using neural networks

    Science.gov (United States)

    Röske, Frank

    1997-03-01

    In this paper, a new method for predicting the sea level employing a neural network approach is introduced. It was designed to improve the prediction of the sea level along the German North Sea Coast under standard conditions. The sea level at any given time depends upon the tides as well as meteorological and oceanographic factors, such as the winds and external surges induced by air pressure. Since tidal predictions are already sufficiently accurate, they have been subtracted from the observed sea levels. The differences will be predicted up to 18 hours in advance. In this paper, the differences are called anomalies. The prediction of the sea level each hour is distinguished from its predictions at the times of high and low tide. For this study, Cuxhaven was selected as a reference site. The predictions made using neural networks were compared for accuracy with the prognoses prepared using six models: two hydrodynamic models, a statistical model, a nearest neighbor model, which is based on analogies, the persistence model, and the verbal forecasts that are broadcast and kept on record by the Sea Level Forecast Service of the Federal Maritime and Hydrography Agency (BSH) in Hamburg. Predictions were calculated for the year 1993 and compared with the actual levels measured. Artificial neural networks are capable of learning. By applying them to the prediction of sea levels, learning from past events has been attempted. It was also attempted to make the experiences of expert forecasters objective. Instead of using the wide-spread back-propagation networks, the self-organizing feature map of Kohonen, or “Kohonen network”, was applied. The fundamental principle of this network is the transformation of the signal similarity into the neighborhood of the neurons while preserving the topology of the signal space. The self-organization procedure of Kohonen networks can be visualized. To make predictions, these networks have been subdivided into a part describing the

  8. A neural network for noise correlation classification

    Science.gov (United States)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time-series, we efficiently reduce the dimensionality of noise correlation data, still keeping relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment in which four students manually classified the data revealed that the classification error they would assign to each other is substantially larger than the classification error of the ANN (>35 per cent). This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.

  9. Antagonistic neural networks underlying differentiated leadership roles.

    Science.gov (United States)

    Boyatzis, Richard E; Rochford, Kylie; Jack, Anthony I

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks - the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  10. Antagonistic Neural Networks Underlying Differentiated Leadership Roles

    Directory of Open Access Journals (Sweden)

    Richard Eleftherios Boyatzis

    2014-03-01

    Full Text Available The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task oriented and socio-emotional oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN) and the Default Mode Network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  11. Antagonistic neural networks underlying differentiated leadership roles

    Science.gov (United States)

    Boyatzis, Richard E.; Rochford, Kylie; Jack, Anthony I.

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks – the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  12. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights, over-fitting is avoided. When ensembles of specialized experts are combined the performance...

  13. Ship Benchmark Shaft and Engine Gain FDI Using Neural Network

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Izadi-Zamanabadi, Roozbeh

    2002-01-01

    This paper concerns fault detection and isolation based on neural network modeling. A neural network is trained to recognize the input-output behavior of a nonlinear plant, and faults are detected if the output estimated by the network differs from the measured plant output by more than a specified threshold value. In the paper a method for determining this threshold based on the neural network model is proposed, which can be used for a design strategy to handle residual sensitivity to input variations. The proposed method is used for successful FDI of a diesel engine gain fault in a ship propulsion...

  14. Forex Market Prediction Using NARX Neural Network with Bagging

    Directory of Open Access Journals (Sweden)

    Shahbazi Nima

    2016-01-01

    Full Text Available We propose a new method for predicting movements in the Forex market based on a NARX neural network with a time-shifting bagging technique and financial indicators, such as the relative strength index and stochastic indicators. Neural networks have prominent learning ability but they often exhibit bad and unpredictable performance for noisy data. When compared with static neural networks, our method significantly reduces the error rate of the response and improves the performance of the prediction. We tested three different types of architecture for predicting the response and determined the best network approach. We applied our method to predict hourly foreign exchange rates and found remarkable predictability in comprehensive experiments with 2 different foreign exchange rates (GBPUSD and EURUSD).
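
    A rough sketch of the NARX idea only (lagged values of the series plus lagged exogenous indicators as regressors for the next value); the synthetic exchange rate, the crude momentum-style indicator, and the omission of the time-shifting bagging step are assumptions made for brevity.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch: a NARX-style regression where the next rate is predicted from
# its own recent lags and lags of an exogenous technical indicator.

rng = np.random.default_rng(5)
T, lags = 600, 5
rate = 1.2 + np.cumsum(0.001 * rng.standard_normal(T))       # synthetic exchange rate
indicator = np.convolve(np.diff(rate, prepend=rate[0]),
                        np.ones(14) / 14.0, mode="same")      # crude momentum indicator

X, y = [], []
for t in range(lags, T - 1):
    X.append(np.concatenate([rate[t - lags:t], indicator[t - lags:t]]))
    y.append(rate[t + 1])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))                                     # chronological split
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```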

  15. Neural networks forecast in small catchments with transfer of network parameters

    Science.gov (United States)

    Maca, P.; Havlicek, V.; Hermanovsky, M.; Horacek, S.; Pech, P.

    2009-04-01

    This contribution deals with a neural network approach for short-term forecasting on small catchments. The applied methodology is based on the theory of the multilayer perceptron (MLP); a feed-forward neural network with a back-propagation optimization procedure was tested in order to explore the possibilities of transferring parameters between different catchments. Supervised optimization of network parameters and structure was investigated. A software tool was created for these research and operative purposes. The hourly discharges and rainfall data of real flood events served as input to the MLP. Seven catchments with areas ranging from 10 to 250 square kilometres, situated in the east part of the Czech Republic, were selected. The input data were normalized by a parametric method. Variable configurations of the neural network were tested in a number of modes represented by different combinations of learning and testing data sets. The analysis focuses on the ability of the model to forecast flood events with different peak discharge magnitudes. This should be achieved in both application steps - MLP learning and testing within a given catchment and in the step of parameter transfer of a well-learned network to another catchment. The length of prediction ranged from one hour to six hours ahead. The results showed that the model is capable of providing satisfying short-term discharge forecasts for most of the studied cases, including successful parameter transfer among different catchments. This was accomplished by using optimization of parameters which determine not only the structure and behaviour of the applied network but also the transformation of input data.

  16. Sign Language Recognition using Neural Networks

    Directory of Open Access Journals (Sweden)

    Sabaheta Djogic

    2014-11-01

    Full Text Available – Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been made for overcoming the problem of communication with deaf people. This encouraged us to develop a system for the Bosnian sign language, since there is a need for such a system. The work is done with the use of digital image processing methods, providing a system that trains a multilayer neural network using a back-propagation algorithm. Images are processed by feature extraction methods, and the data set has been created by a masking method. Training is done using the cross-validation method for better performance; thus, an accuracy of 84% is achieved.

  17. Recognition of Gestures using Artifical Neural Network

    Directory of Open Access Journals (Sweden)

    Marcel MORE

    2013-12-01

    Full Text Available Sensors for motion measurements are now becoming more widespread. Thanks to their parameters and affordability, they are already used not only in the professional sector, but also in devices intended for daily use or entertainment. One of their applications is in the control of devices by gestures. Systems that can determine the type of gesture from measured motion have many uses. Some are, for example, in medical practice, but they are still more often used in devices such as cell phones, where they serve as a non-standard form of input. Today there are already several approaches for solving this problem, but building a sufficiently reliable system is still a challenging task. In our project we are developing a solution based on an artificial neural network. Unlike other solutions, this one does not require building a model for each measuring system, and thus it can be used in combination with various sensors with only minimal changes in its structure.

  18. Spatial Dynamics of Multilayer Cellular Neural Networks

    Science.gov (United States)

    Wu, Shi-Liang; Hsu, Cheng-Hsiung

    2018-02-01

    The purpose of this work is to study the spatial dynamics of one-dimensional multilayer cellular neural networks. We first establish the existence of rightward and leftward spreading speeds of the model. Then we show that the spreading speeds coincide with the minimum wave speeds of the traveling wave fronts in the right and left directions. Moreover, we obtain the asymptotic behavior of the traveling wave fronts when the wave speeds are positive and greater than the spreading speeds. According to the asymptotic behavior and using various kinds of comparison theorems, some front-like entire solutions are constructed by combining the rightward and leftward traveling wave fronts with different speeds and a spatially homogeneous solution of the model. Finally, various qualitative features of such entire solutions are investigated.

  19. Enhancing Hohlraum Design with Artificial Neural Networks

    Science.gov (United States)

    Peterson, J. L.; Berzak Hopkins, L. F.; Humbird, K. D.; Brandon, S. T.; Field, J. E.; Langer, S. H.; Nora, R. C.; Spears, B. K.

    2017-10-01

    A primary goal of hohlraum design is to efficiently convert available laser power and energy to capsule drive, compression and ultimately fusion neutron yield. However, a major challenge of this multi-dimensional optimization problem is the relative computational expense of hohlraum simulations. In this work, we explore overcoming this obstacle with the use of artificial neural networks built off ensembles of hohlraum simulations. These machine learning systems emulate the behavior of full simulations in a fraction of the time, thereby enabling the rapid exploration of design parameters. We will demonstrate this technology with a search for modifications to existing high-yield designs that can maximize neutron production within NIF's current laser power and energy constraints. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734401.

  20. Tropical Timber Identification using Backpropagation Neural Network

    Science.gov (United States)

    Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.

    2017-01-01

    Each and every type of wood has different characteristics. Identifying the type of wood properly is important, especially for industries that need to know the type of timber specifically. However, it requires expertise in identifying the type of wood, and only limited experts are available. In addition, manual identification, even by experts, is rather inefficient because it requires a lot of time and carries the possibility of human error. To overcome these problems, a digital image based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray-level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
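
    The sketch below illustrates the feature-extraction stage mentioned above: a gray-level co-occurrence matrix computed by hand for one displacement, two Haralick-style texture features, and decimal-scaling normalization. The random patch, the quantization level, and the choice of features are assumptions; the study's full pipeline and classifier are not reproduced.

```python
import numpy as np

# Hedged sketch: GLCM texture features plus decimal-scaling normalization, the
# kind of vector that could feed the backpropagation classifier described above.

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized co-occurrence counts of gray-level pairs at offset (dy, dx)."""
    q = (image * (levels - 1) / image.max()).astype(int)   # quantize to `levels` bins
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1.0
    return P / P.sum()

rng = np.random.default_rng(6)
wood_patch = rng.random((64, 64))        # placeholder for a microscope image patch

P = glcm(wood_patch)
i, j = np.indices(P.shape)
contrast = np.sum(P * (i - j) ** 2)      # texture coarseness
energy = np.sum(P ** 2)                  # uniformity
features = np.array([contrast, energy])

# Decimal-scaling normalization: divide by a power of ten so that |feature| < 1.
scale = 10.0 ** np.ceil(np.log10(np.abs(features).max()))
print("normalized features:", features / scale)
```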

  1. Neural network training as a dissipative process.

    Science.gov (United States)

    Gori, Marco; Maggini, Marco; Rossi, Alessandro

    2016-09-01

    This paper analyzes the practical issues and reports some results on a theory in which learning is modeled as a continuous temporal process driven by laws describing the interactions of intelligent agents with their own environment. The classic regularization framework is paired with the idea of temporal manifolds by introducing the principle of least cognitive action, which is inspired by the related principle of mechanics. The introduction of the counterparts of the kinetic and potential energy leads to an interpretation of learning as a dissipative process. As an example, we apply the theory to supervised learning in neural networks and show that the corresponding Euler-Lagrange differential equations can be connected to the classic gradient descent algorithm on the supervised pairs. We give preliminary experiments to confirm the soundness of the theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A hybrid ART-GRNN online learning neural network with an epsilon-insensitive loss function.

    Science.gov (United States)

    Yap, Keem Siah; Lim, Chee Peng; Abidin, Izham Zainal

    2008-09-01

    In this brief, a new neural network model called generalized adaptive resonance theory (GART) is introduced. GART is a hybrid model that comprises a modified Gaussian adaptive resonance theory (MGA) and the generalized regression neural network (GRNN). It is an enhanced version of the GRNN, which preserves the online learning properties of adaptive resonance theory (ART). A series of empirical studies to assess the effectiveness of GART in classification, regression, and time series prediction tasks is conducted. The results demonstrate that GART is able to produce good performances as compared with those of other methods, including the online sequential extreme learning machine (OSELM) and sequential learning radial basis function (RBF) neural network models.

  3. The performance of immune-based neural network with financial time series prediction

    Directory of Open Access Journals (Sweden)

    Dhiya Al-Jumeily

    2015-12-01

    Full Text Available This paper presents the use of immune-based neural networks, including the multilayer perceptron (MLP) and functional neural networks, for the prediction of financial time series signals. Extensive simulations for the prediction of one and five steps ahead of stationary and non-stationary time series were performed, which indicate that immune-based neural networks in most cases demonstrated advantages in capturing chaotic movement in the financial signals, with an improvement in the profit return and rapid convergence over MLPs.

  4. Comparison between extreme learning machine and wavelet neural networks in data classification

    Science.gov (United States)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    The extreme learning machine is a well-known algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and it is an extremely fast learning algorithm with good generalization performance. In this paper, we aim to compare the extreme learning machine with wavelet neural networks, which are also widely used. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results show that both the extreme learning machine and wavelet neural networks achieve good results.
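
    A single-hidden-layer extreme learning machine can be written down directly from this description: the hidden-layer weights are random and fixed, and only the output weights are solved by least squares. The activation, hidden-layer size and toy data below are assumptions for illustration.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random fixed hidden layer, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (never trained)
    b = rng.normal(size=n_hidden)                  # random hidden biases
    H = np.tanh(X @ W + b)                         # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn y = x1 * x2 from random samples
X = np.random.randn(500, 2)
y = X[:, 0] * X[:, 1]
W, b, beta = elm_train(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))   # small training error
```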

  5. Financial time series prediction using spiking neural networks.

    Directory of Open Access Journals (Sweden)

    David Reid

    Full Text Available In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  6. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fedbatch bioreactor simulation problem. Use of techniques from recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
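
    A rough sketch of the idea follows, using a plain forward-Euler discretization of a single-state system and a tiny stand-in network for the unknown reaction-rate term; the actual work uses a TensorFlow recurrent architecture, so the shapes, step size and parameters here are illustrative assumptions only.

```python
import numpy as np

def reaction_rate(c, params):
    """Tiny one-hidden-layer network standing in for the unknown sink term r(c)."""
    W1, b1, W2, b2 = params
    h = np.tanh(W1 * c + b1)          # scalar state broadcast across hidden units
    return float(W2 @ h + b2)

def rollout(c0, params, dt, n_steps):
    """Forward-Euler discretization of dc/dt = -r(c); each step acts like one
    unrolled recurrent cell, carrying the state c between steps."""
    traj, c = [c0], c0
    for _ in range(n_steps):
        c = c + dt * (-reaction_rate(c, params))
        traj.append(c)
    return np.array(traj)

rng = np.random.default_rng(0)
params = (rng.normal(size=8), rng.normal(size=8), rng.normal(size=8), 0.0)
print(rollout(1.0, params, dt=0.05, n_steps=10))
# Training would adjust `params` (e.g. by backpropagation through the unrolled
# steps) so that the rollout matches the partially observed measurements.
```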

  7. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle approximating the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than approximating it with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
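
    The two-stage scheme discussed above (estimate a low-dimensional projection from a sparse, uniformly drawn sample, then approximate the function over the projected data) might look roughly as follows; the dimensions, sample size, synthetic target function and PCA-style linear projection are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 100))                    # high-dimensional data (synthetic)
y = np.sin(X[:, :3]).sum(axis=1)                    # target function to approximate

# Estimate the projection from a sparse, uniformly drawn sample only
sample = X[rng.choice(len(X), size=200, replace=False)]
mean = sample.mean(axis=0)
_, _, Vt = np.linalg.svd(sample - mean, full_matrices=False)
P = Vt[:10].T                                       # 100-d -> 10-d linear projection

Z = (X - mean) @ P                                  # project the full data set
# A neural network would now be trained on (Z, y) rather than (X, y); e.g. with
# scikit-learn: MLPRegressor(hidden_layer_sizes=(50,)).fit(Z, y)
```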

  8. Analog neural network-based helicopter gearbox health monitoring system.

    Science.gov (United States)

    Monsen, P T; Dzwonczyk, M; Manolakos, E S

    1995-12-01

    The development of a reliable helicopter gearbox health monitoring system (HMS) has been the subject of considerable research over the past 15 years. The deployment of such a system could lead to a significant saving in lives and vehicles as well as dramatically reduce the cost of helicopter maintenance. Recent research results indicate that a neural network-based system could provide a viable solution to the problem. This paper presents two neural network-based realizations of an HMS system. A hybrid (digital/analog) neural system is proposed as an extremely accurate off-line monitoring tool used to reduce helicopter gearbox maintenance costs. In addition, an all analog neural network is proposed as a real-time helicopter gearbox fault monitor that can exploit the ability of an analog neural network to directly compute the discrete Fourier transform (DFT) as a sum of weighted samples. Hardware performance results are obtained using the Integrated Neural Computing Architecture (INCA/1) analog neural network platform that was designed and developed at The Charles Stark Draper Laboratory. The results indicate that it is possible to achieve a 100% fault detection rate with 0% false alarm rate by performing a DFT directly on the first layer of INCA/1 followed by a small-size two-layer feed-forward neural network and a simple post-processing majority voting stage.
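
    The "DFT as a sum of weighted samples" idea maps directly onto a fixed first layer whose weights are the DFT basis. The sketch below only demonstrates that construction; the frame length is arbitrary and the comparison with NumPy's FFT is a sanity check, not part of the analog hardware described.

```python
import numpy as np

def dft_layer_weights(n):
    """Fixed first-layer weights that compute the real and imaginary parts of the
    DFT as plain weighted sums of the n input samples."""
    k = np.arange(n).reshape(-1, 1)
    t = np.arange(n).reshape(1, -1)
    angle = 2.0 * np.pi * k * t / n
    return np.cos(angle), -np.sin(angle)

x = np.random.randn(64)                        # one frame of vibration samples
W_re, W_im = dft_layer_weights(64)
spectrum = np.abs(W_re @ x + 1j * (W_im @ x))  # magnitude spectrum from weighted sums
assert np.allclose(spectrum, np.abs(np.fft.fft(x)))
# A small feed-forward classifier and a majority-voting stage would follow this
# fixed layer, as described above.
```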

  9. ARTIFICIAL NEURAL NETWORK FOR MODELS OF HUMAN OPERATOR

    Directory of Open Access Journals (Sweden)

    Martin Ruzek

    2017-12-01

    Full Text Available This paper presents a new approach to modelling mental functions with the use of artificial neural networks. Artificial neural networks seem to be a promising method for modelling a human operator because the architecture of the ANN is directly inspired by the biological neuron. On the other hand, the classical paradigms of artificial neural networks are not suitable because they oversimplify the real processes in biological neural networks. The search for a compromise between the complexity of biological neural networks and the practical feasibility of the artificial network led to a new learning algorithm. This algorithm is based on the classical multilayered neural network; however, the learning rule is different. The neurons update their parameters in a way that is similar to real biological processes. The basic idea is that the neurons compete for resources, and the criterion deciding which neuron will survive is the usefulness of the neuron to the whole neural network. The neuron does not use a "teacher" or any kind of supervisory system; it receives only the information that is present in the biological system. The learning process can be seen as a search for an equilibrium point, i.e., a state of maximal importance of the neuron for the neural network. This position can change if the environment changes. The name of this type of learning, the homeostatic artificial neural network, originates from this idea, as it is similar to the process of homeostasis known in any living cell. The simulation results suggest that this type of learning can also be useful in other tasks of artificial learning and recognition.

  10. Finding strong lenses in CFHTLS using convolutional neural networks

    Science.gov (United States)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg² of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  11. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutar; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods: autoregressive integrated moving average; feedforward neural networks-time delay neural network and online sequential extreme learning machine; and recurrent neural networks-long short-term memory and echo-state networks on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  12. Decorrelated jet substructure tagging using adversarial neural networks

    Science.gov (United States)

    Shimmin, Chase; Sadowski, Peter; Baldi, Pierre; Weik, Edison; Whiteson, Daniel; Goul, Edward; Søgaard, Andreas

    2017-10-01

    We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.

  13. Evolving Neural Networks for the Classification of Galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Cantu-Paz, E; Kamath, C

    2002-01-23

    The FIRST survey (Faint Images of the Radio Sky at Twenty-cm) is scheduled to cover 10,000 square degrees of the northern and southern galactic caps. Until recently, astronomers classified radio-emitting galaxies through a visual inspection of FIRST images. Besides being subjective, prone to error and tedious, this manual approach is becoming infeasible: upon completion, FIRST will include almost a million galaxies. This paper describes the application of six methods of evolving neural networks (NNs) with genetic algorithms (GAs) to identify bent-double galaxies. The objective is to demonstrate that GAs can successfully address some common problems in the application of NNs to classification problems, such as training the networks, choosing appropriate network topologies, and selecting relevant features. The results indicate that most of the methods perform equally well on our data, but the feature selection method gives superior results.

  14. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful to learn the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize the too complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.

  15. Toward Petascale Biologically Plausible Neural Networks

    Science.gov (United States)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.

  16. Enhancing optical communication with deep neural networks

    Science.gov (United States)

    Lohani, Sanjaya; Knutson, Erin; Tkach, Sam; Huver, Sean; Glasser, Ryan; Tulane University Collaboration; Deep Science AI Collaboration

    The spatial profile of optical modes may be altered such that they contain nonzero orbital angular momentum (OAM). Laguerre-Gauss (LG) states of light have a helical wavefront and well-defined OAM, and have recently been shown to allow for larger information transfer rates in optical communications as compared to using only Gaussian modes. A primary difficulty, however, is the accurate classification of different OAM optical states, which contain different values of OAM, in the detection stage. The difficulty in this differentiation increases as larger degrees of OAM are used. Here we show the performance of deep neural networks in the simultaneous classification of numerically generated, noisy, Laguerre-Gauss states with OAM value up to 100 can reach near 100% accuracy. This method relies only on the intensity profile of the detected OAM states, avoiding bulky and difficult-to-implement methods that are required to measure the phase profile of the modes in the receiver of the communication platform. This allows for a simplification in the network design and an increase in performance when using states with large degrees of OAM. We anticipate that this approach will allow for significant advances in the development of optical communication technologies. We acknowledge funding from the Louisiana State Board of Regents and Northrop Grumman - NG NEXT.

  17. Forecasting Water Levels Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Shreenivas N. Londhe

    2011-06-01

    Full Text Available For all ocean-related activities it is necessary to predict water levels as accurately as possible. The present work aims at predicting water levels with a lead time of a few hours to a day using the technique of artificial neural networks. Instead of using the previous and current values of the observed water level time series directly as input and output, the water level anomaly (the difference between the observed water level and the harmonically predicted tidal level) is calculated for each hour and the ANN model is developed using this time series. The network-predicted anomaly is then added to the harmonic tidal level to predict the water levels. The exercise is carried out at six locations, two in the Gulf of Mexico, two in the Gulf of Maine and two in the Gulf of Alaska along the USA coastline. The ANN models performed reasonably well for all forecasting intervals at all the locations. The ANN models were also run in real-time mode for a period of eight months. Considering the hurricane season in the Gulf of Mexico, the models were also tested particularly during hurricanes.
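
    A minimal sketch of the anomaly-based setup described above is given below; the file names, the 24-hour input window and the 6-hour lead time are assumptions, and any regression ANN could fill the commented-out modelling step.

```python
import numpy as np

observed = np.loadtxt("water_levels.csv")       # hourly observed levels (assumed file)
harmonic = np.loadtxt("harmonic_tide.csv")      # harmonically predicted tide (assumed file)
anomaly = observed - harmonic                   # the series the ANN actually learns

lag, lead = 24, 6                               # past 24 h -> anomaly about 6 h ahead
X = np.array([anomaly[i:i + lag] for i in range(len(anomaly) - lag - lead)])
y = anomaly[lag + lead:]

# e.g. model = MLPRegressor(hidden_layer_sizes=(20,)).fit(X, y)
# forecast water level = model.predict(latest_window) + harmonic tide at the lead time
```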

  18. Prediction of surface distress using neural networks

    Science.gov (United States)

    Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo; Cortez, Paulo

    2017-06-01

    Road infrastructures contribute to a healthy economy through a sustainable distribution of goods and services. A road network requires appropriately programmed maintenance treatments in order to keep road assets in good condition, providing maximum safety for road users under a cost-effective approach. Surface distress is the key element for identifying road condition and may be generated by many different factors. In this paper, a new data-driven approach is proposed to predict Surface Distress Index (SDI) values. The model is applied using data obtained from the Integrated Road Management System (IRMS) database. Artificial Neural Networks (ANNs) are used to predict the SDI index using input variables related to surface distress, i.e., crack area and width, potholes, rutting, patching and depression. The achieved results show that the ANN is able to predict SDI with a high correlation factor (R² = 0.996). Moreover, a sensitivity analysis was applied to the ANN model, revealing the influence of the most relevant input parameters for SDI prediction, namely rutting (59.8%), crack width (29.9%), crack area (5.0%), patching (3.0%), potholes (1.7%) and depression (0.3%).

  19. Reject mechanisms for massively parallel neural network character recognition systems

    Science.gov (United States)

    Garris, Michael D.; Wilson, Charles L.

    1992-12-01

    Two reject mechanisms are compared using a massively parallel character recognition system implemented at NIST. The recognition system was designed to study the feasibility of automatically recognizing hand-printed text in a loosely constrained environment. The first method is a simple scalar threshold on the output activation of the winning neurode from the character classifier network. The second method uses an additional neural network trained on all outputs from the character classifier network to accept or reject assigned classifications. The neural network rejection method was expected to perform with greater accuracy than the scalar threshold method, but this was not supported by the test results presented. The scalar threshold method, even though arbitrary, is shown to be a viable reject mechanism for use with neural network character classifiers. Upon studying the performance of the neural network rejection method, analyses show that the two neural networks, the character classifier network and the rejection network, perform very similarly. This can be explained by the strong non-linear function of the character classifier network which effectively removes most of the correlation between character accuracy and all activations other than the winning activation. This suggests that any effective rejection network must receive information from the system which has not been filtered through the non-linear classifier.
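
    The scalar-threshold reject mechanism amounts to a one-line test on the winning output activation, as in the sketch below; the threshold value is an arbitrary illustration, not the one used in the NIST system.

```python
import numpy as np

def classify_with_reject(activations, threshold=0.8):
    """Accept the winning class only if its output activation clears the threshold;
    otherwise reject the character for manual handling."""
    winner = int(np.argmax(activations))
    return winner if activations[winner] >= threshold else None

print(classify_with_reject(np.array([0.05, 0.91, 0.04])))   # -> 1 (accepted)
print(classify_with_reject(np.array([0.40, 0.35, 0.25])))   # -> None (rejected)
```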

  20. ESTIMATION OF PV MODULE SURFACE TEMPERATURE USING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Can Coskun

    2016-12-01

    Full Text Available This study aimed to use the artificial neural network (ANN) method to estimate the surface temperature of a photovoltaic (PV) panel. Using the experimentally obtained PV data, the accuracy of the ANN model was evaluated. To train the ANN, outdoor temperature, solar radiation and wind speed values were the inputs and surface temperature was the output. The feed-forward artificial neural network was trained using the Levenberg-Marquardt (LM) algorithm. Two back-propagation-type ANN algorithms, scaled conjugate gradient (SCG) back propagation and resilient back propagation (RB), were also used and their performance was compared with the estimate from the LM algorithm. Two thirds of the experimental data were used to train the artificial neural networks and the remaining third was used for testing. The performances of these three types of artificial neural network were compared and mean error rates of between 0.005962 and 0.012177% were obtained. The best estimate was produced by the LM algorithm. Estimation of PV surface temperature with artificial neural networks provides better results than conventional correlation methods. This study showed that artificial neural networks may be effectively used to estimate PV surface temperature.

  1. Hybrid neural network bushing model for vehicle dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Jeong Hyun [Pukyong National University, Busan (Korea, Republic of); Lee, Seung Kyu [Hyosung Corporation, Changwon (Korea, Republic of); Yoo, Wan Suk [Pusan National University, Busan (Korea, Republic of)

    2008-12-15

    Although the linear model has been widely used as the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of bushings in terms of amplitude and frequency. An artificial neural network model was suggested to account for the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model was employed to represent the linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, in which sine excitation with different frequencies and amplitudes was applied. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers.
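
    Structurally, the hybrid model is a linear spring-damper force plus a network correction for the hysteresis, roughly as sketched below; the stiffness and damping values are made-up numbers and the correction network is left as a placeholder.

```python
def hybrid_bushing_force(x, v, history, k, c, nn_correction):
    """Hybrid bushing model: linear stiffness/damping term plus a neural-network
    correction capturing amplitude- and frequency-dependent hysteresis."""
    return k * x + c * v + nn_correction(history)

# `nn_correction` stands for a trained network mapping a short window of recent
# displacement/velocity samples to a force correction (placeholder here).
nn_correction = lambda history: 0.0
print(hybrid_bushing_force(x=0.002, v=0.1, history=[], k=1.5e5, c=300.0,
                           nn_correction=nn_correction))
```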

  2. Digital associative memory neural network with optical learning capability

    Science.gov (United States)

    Watanabe, Minoru; Ohtsubo, Junji

    1994-12-01

    A digital associative memory neural network system with optical learning and recalling capabilities is proposed, using liquid crystal television spatial light modulators and an Optic RAM detector. In spite of the drawback of limited memory capacity compared with optical analogue associative memory neural networks, the proposed optical digital neural network has the advantage of all-optical learning and recalling capabilities, so an all-optical network system is easily realized. Some experimental results of learning and recalling for character recognition are presented. This new optical architecture offers compactness of the system and fast learning and recalling properties. Based on the results, a practical system for the implementation of a faster optical digital associative memory neural network with ferro-electric liquid crystal SLMs is also proposed.

  3. ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Tamara Gvozdenović

    2007-06-01

    Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. The artificial neural network is a data-processing paradigm inspired by the biological brain, and it is used in numerous different fields, among them project management. This research is oriented to the application of artificial neural networks in managing the time of investment projects. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic time in the PERT method. The program package Matlab: Neural Network Toolbox is used in the data simulation. The feed-forward back-propagation network is chosen.
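
    For reference, once the three time estimates are produced (by the trained neural networks in this study), the expected duration and variance follow from the standard PERT formulas:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT expected duration and variance from three time estimates."""
    expected = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
    variance = ((pessimistic - optimistic) / 6.0) ** 2
    return expected, variance

# Example: O = 4, M = 6, P = 10 days  ->  expected 6.33 days, variance 1.0
print(pert_estimate(4.0, 6.0, 10.0))
```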

  4. Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding

    OpenAIRE

    Sun, Zheng; Liu, Jiaqi; Zhang, Zewang; Chen, Jingwen; Huo, Zhao; Lee, Ching Hua; Zhang, Xiao

    2016-01-01

    Creating aesthetically pleasing pieces of art, including music, has been a long-term goal for artificial intelligence research. Despite recent successes of long-short term memory (LSTM) recurrent neural networks (RNNs) in sequential learning, LSTM neural networks have not, by themselves, been able to generate natural-sounding music conforming to music theory. To transcend this inadequacy, we put forward a novel method for music composition that combines the LSTM with Grammars motivated by mus...

  5. Optogenetics in Silicon: A Neural Processor for Predicting Optically Active Neural Networks.

    Science.gov (United States)

    Junwen Luo; Nikolic, Konstantin; Evans, Benjamin D; Na Dong; Xiaohan Sun; Andras, Peter; Yakovlev, Alex; Degenaar, Patrick

    2017-02-01

    We present a reconfigurable neural processor for real-time simulation and prediction of opto-neural behaviour. We combined a detailed Hodgkin-Huxley CA3 neuron integrated with a four-state Channelrhodopsin-2 (ChR2) model into reconfigurable silicon hardware. Our architecture consists of a Field Programmable Gate Array (FPGA) with a custom-built computing data-path, a separate data management system and a router based on a memory approach. Advancements over previous work include the incorporation of short- and long-term calcium and light-dependent ion channels in reconfigurable hardware. In addition, the developed processor is computationally efficient, requiring only 0.03 ms processing time per sub-frame for a single neuron and 9.7 ms for a fully connected network of 500 neurons at an FPGA frequency of 56.7 MHz. It can therefore be utilized for exploration of closed-loop processing and tuning of biologically realistic optogenetic circuitry.

  6. Active Engine Mounting Control Algorithm Using Neural Network

    Directory of Open Access Journals (Sweden)

    Fadly Jashi Darsivan

    2009-01-01

    Full Text Available This paper proposes the application of a neural network as a controller to isolate engine vibration in an active engine mounting system. It has been shown that the NARMA-L2 neurocontroller has the ability to reject disturbances from a plant. The disturbances are assumed to be both impulse and sinusoidal disturbances induced by the engine. The performance of the neural network controller is compared with conventional PD and PID controllers tuned using the Ziegler-Nichols method. From the simulation results, the neural network controller has shown a better ability to isolate the engine vibration than the conventional controllers.

  7. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL; Plank, James [University of Tennessee (UT); Disney, Adam [University of Tennessee (UT); Reynolds, John [University of Tennessee (UT)

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.

  8. Complex-valued neural networks advances and applications

    CERN Document Server

    Hirose, Akira

    2013-01-01

    Presents the latest advances in complex-valued neural networks by demonstrating the theory in a wide range of applications Complex-valued neural networks is a rapidly developing neural network framework that utilizes complex arithmetic, exhibiting specific characteristics in its learning, self-organizing, and processing dynamics. They are highly suitable for processing complex amplitude, composed of amplitude and phase, which is one of the core concepts in physical systems to deal with electromagnetic, light, sonic/ultrasonic waves as well as quantum waves, namely, electron and

  9. Fast training of neural networks for load forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Agosta, J.M.; Nielsen, N.R.; Andeen, G. [SRI International, Menlo Park, CA (United States)

    1996-10-01

    Predicting load demand (e.g., demand for electric power) in a data-rich environment is basically a regression problem. To be successful, however, any regression technique must take into account the nonlinear nature of the problem. Numerous nonlinear regression methods have become practical, with the availability of more powerful computers. Perhaps the best known of these methods are techniques that have been popularized under the name of neural networks, and the most common of these is the back-propagation neural network (BPNN). This paper explains the advantage of a different nonlinear regression method known as the probabilistic neural network (PNN).

  10. Neural Networks through Shared Maps in Mobile Devices

    Directory of Open Access Journals (Sweden)

    William Raveane

    2014-12-01

    Full Text Available We introduce a hybrid system composed of a convolutional neural network and a discrete graphical model for image recognition. This system improves upon traditional sliding-window techniques for the analysis of an image larger than the training data by effectively processing the full input scene through the neural network in less time. The final result is then inferred from the neural network output through energy minimization to reach a more precise localization than traditional maximum-value class comparisons yield. These results make the process suitable for real-time image recognition on a mobile device.

  11. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We present an approach for the recognition of various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. We construct an artificial neural network based on the perceptron architecture and demonstrate its effectiveness in pattern recognition of the experimental EEG. After learning, the artificial neural network reliably identified cube recognition processes, for example, left- or right-oriented Necker cubes with different intensities of their edges.

  12. Chaotic Hopfield Neural Network Swarm Optimization and Its Application

    Directory of Open Access Journals (Sweden)

    Yanxia Sun

    2013-01-01

    Full Text Available A new neural network based optimization algorithm is proposed. The presented model is a discrete-time, continuous-state Hopfield neural network and the states of the model are updated synchronously. The proposed algorithm combines the advantages of traditional PSO, chaos and Hopfield neural networks: particles learn from their own experience and the experiences of surrounding particles, their search behavior is ergodic, and convergence of the swarm is guaranteed. The effectiveness of the proposed approach is demonstrated using simulations and typical optimization problems.

  13. Application of artificial neural networks (ANNs) in wine technology.

    Science.gov (United States)

    Baykal, Halil; Yildirim, Hatice Kalkan

    2013-01-01

    In recent years, neural networks have turned out to be a powerful method for numerous practical applications in a wide variety of disciplines. In more practical terms, neural networks are nonlinear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data. In food technology, artificial neural networks (ANNs) are useful for food safety and quality analyses, predicting chemical, functional and sensory properties of various food products during processing and distribution. In wine technology, ANNs have been used for classification and for predicting wine process conditions. This review discusses the basic ANN technology and its possible applications in wine technology.

  14. Hidden Neural Networks: A Framework for HMM/NN Hybrids

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric; Krogh, Anders Stærmose

    1997-01-01

    This paper presents a general framework for hybrids of hidden Markov models (HMM) and neural networks (NN). In the new framework called hidden neural networks (HNN) the usual HMM probability parameters are replaced by neural network outputs. To ensure a probabilistic interpretation the HNN...... HMMs on TIMIT continuous speech recognition benchmarks. On the task of recognizing five broad phoneme classes an accuracy of 84% is obtained compared to 76% for a standard HMM. Additionally, we report a preliminary result of 69% accuracy on the TIMIT 39 phoneme task...

  15. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
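
    As an illustration of the kind of pruning involved (the paper's own pruning criterion may differ), magnitude-based removal of input-to-hidden connections can be written as:

```python
import numpy as np

def prune_input_weights(w, fraction=0.5):
    """Zero out the smallest-magnitude connections from the inputs to the single
    hidden unit; the surviving inputs are the ones the extracted rules refer to."""
    pruned = w.copy()
    drop = np.argsort(np.abs(w))[:int(fraction * len(w))]
    pruned[drop] = 0.0
    return pruned

print(prune_input_weights(np.array([0.02, -1.3, 0.4, -0.01, 0.9]), fraction=0.4))
# -> [ 0.  -1.3  0.4  0.   0.9]
```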

  16. Neural network wavelet technology: A frontier of automation

    Science.gov (United States)

    Szu, Harold

    1994-01-01

    Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the, so-called, 6th Generation Computer. Enormous amounts of resources have been poured into R/D. Wavelet Transforms (WT) have replaced Fourier Transforms (FT) in Wideband Transient (WT) cases since the discovery of WT in 1985. The list of successful applications includes the following: earthquake prediction; radar identification; speech recognition; stock market forecasting; FBI finger print image compression; and telecommunication ISDN-data compression.

  17. Fault Tolerant Neural Network for ECG Signal Classification Systems

    Directory of Open Access Journals (Sweden)

    MERAH, M.

    2011-08-01

    Full Text Available The aim of this paper is to apply a new robust hardware artificial neural network (ANN) to ECG classification systems. This ANN includes a penalization criterion which improves its performance in terms of robustness. Specifically, in this method, the ANN weights are normalized using the auto-prune method. Simulations performed on the MIT-BIH ECG signals have shown that significant robustness improvements are obtained with respect to potential hardware artificial neuron failures. Moreover, we show that the proposed design achieves better generalization performance compared to the standard back-propagation algorithm.

  18. Neural Network Based Intelligent Sootblowing System

    Energy Technology Data Exchange (ETDEWEB)

    Mark Rhode

    2005-04-01

    Due to the composition of coal, particulate matter is also a by-product of coal combustion. Modern-day utility boilers are usually fitted with electrostatic precipitators to aid in the collection of particulate matter. Although extremely efficient, these devices are sensitive to rapid changes in inlet mass concentration as well as total mass loading. Traditionally, utility boilers are equipped with devices known as sootblowers, which use steam, water or air to dislodge deposits and clean the surfaces within the boiler, and which are operated based upon established rules or the operator's judgment. Poor sootblowing regimes can influence particulate mass loading to the electrostatic precipitators. The project applied a neural network intelligent sootblowing system in conjunction with state-of-the-art controls and instruments to optimize the operation of a utility boiler and systematically control boiler slagging/fouling. This optimization process targeted a 30% reduction in NOx, a 2% improvement in efficiency and a 5% reduction in opacity. The neural network system proved to be a non-invasive system which can readily be adapted to virtually any utility boiler. Specific conclusions from this neural network application are listed below. These conclusions should be used in conjunction with the specific details provided in the technical discussions of this report to develop a thorough understanding of the process.

  19. Neural Networks of Colored Sequence Synesthesia

    Science.gov (United States)

    Narayan, Manjari; Allen, Genevera I.

    2013-01-01

    Synesthesia is a condition in which normal stimuli can trigger anomalous associations. In this study, we exploit synesthesia to understand how the synesthetic experience can be explained by subtle changes in network properties. Of the many forms of synesthesia, we focus on colored sequence synesthesia, a form in which colors are associated with overlearned sequences, such as numbers and letters (graphemes). Previous studies have characterized synesthesia using resting-state connectivity or stimulus-driven analyses, but it remains unclear how network properties change as synesthetes move from one condition to another. To address this gap, we used functional MRI in humans to identify grapheme-specific brain regions, thereby constructing a functional “synesthetic” network. We then explored functional connectivity of color and grapheme regions during a synesthesia-inducing fMRI paradigm involving rest, auditory grapheme stimulation, and audiovisual grapheme stimulation. Using Markov networks to represent direct relationships between regions, we found that synesthetes had more connections during rest and auditory conditions. We then expanded the network space to include 90 anatomical regions, revealing that synesthetes tightly cluster in visual regions, whereas controls cluster in parietal and frontal regions. Together, these results suggest that synesthetes have increased connectivity between grapheme and color regions, and that synesthetes use visual regions to a greater extent than controls when presented with dynamic grapheme stimulation. These data suggest that synesthesia is better characterized by studying global network dynamics than by individual properties of a single brain region. PMID:23986245

  20. Neural networks of colored sequence synesthesia.

    Science.gov (United States)

    Tomson, Steffie N; Narayan, Manjari; Allen, Genevera I; Eagleman, David M

    2013-08-28

    Synesthesia is a condition in which normal stimuli can trigger anomalous associations. In this study, we exploit synesthesia to understand how the synesthetic experience can be explained by subtle changes in network properties. Of the many forms of synesthesia, we focus on colored sequence synesthesia, a form in which colors are associated with overlearned sequences, such as numbers and letters (graphemes). Previous studies have characterized synesthesia using resting-state connectivity or stimulus-driven analyses, but it remains unclear how network properties change as synesthetes move from one condition to another. To address this gap, we used functional MRI in humans to identify grapheme-specific brain regions, thereby constructing a functional "synesthetic" network. We then explored functional connectivity of color and grapheme regions during a synesthesia-inducing fMRI paradigm involving rest, auditory grapheme stimulation, and audiovisual grapheme stimulation. Using Markov networks to represent direct relationships between regions, we found that synesthetes had more connections during rest and auditory conditions. We then expanded the network space to include 90 anatomical regions, revealing that synesthetes tightly cluster in visual regions, whereas controls cluster in parietal and frontal regions. Together, these results suggest that synesthetes have increased connectivity between grapheme and color regions, and that synesthetes use visual regions to a greater extent than controls when presented with dynamic grapheme stimulation. These data suggest that synesthesia is better characterized by studying global network dynamics than by individual properties of a single brain region.

  1. Neural Schematics as a unified formal graphical representation of large-scale Neural Network Structures

    Directory of Open Access Journals (Sweden)

    Matthias eEhrlich

    2013-10-01

    Full Text Available One of the major outcomes of neuroscientific research is models of Neural Network Structures. Descriptions of these models usually consist of a non-standardized mixture of text, figures, and other means of visual information communication in print media. However, as neuroscience is an interdisciplinary domain by nature, a standardized way of consistently representing models of Neural Network Structures is required. While generic descriptions of such models in textual form have recently been developed, a formalized way of schematically expressing them does not exist to date. Hence, in this paper we present Neural Schematics as a concept, inspired by similar approaches from other disciplines, for a generic two-dimensional representation of said structures. After introducing Neural Network Structures in general, a set of current visualizations of models of Neural Network Structures is reviewed and analyzed for what information they convey and how their elements are rendered. This analysis then allows for the definition of general items and symbols to consistently represent these models as Neural Schematics on a two-dimensional plane. We illustrate the possibilities an agreed-upon standard can yield using sample diagrams transformed into Neural Schematics and an example application for the design and modeling of large-scale Neural Network Structures.

  2. Neural networks supporting autobiographical memory retrieval in post-traumatic stress disorder

    Science.gov (United States)

    Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.

    2013-01-01

    Post-traumatic stress disorder (PTSD) affects the functional recruitment and connectivity between neural regions during autobiographical memory (AM) retrieval that overlap with the default and control networks. Whether such univariate changes relate to potential differences in the contribution of large-scale neural networks supporting cognition in PTSD is unknown. In the current functional MRI (fMRI) study we employ independent component analysis to examine the engagement of neural networks during the recall of personal memories in PTSD (15 participants) compared to non-trauma-exposed healthy controls (14 participants). We found that the PTSD group recruited similar neural networks to the controls during AM recall, including default network subsystems and control networks, but there were group differences in the spatial and temporal characteristics of these networks. First, there were spatial differences in the contribution of the anterior and posterior midline across the networks, and with the amygdala in particular for the medial temporal subsystem of the default network. Second, there were temporal differences in the relationship of the medial prefrontal subsystem of the default network, with less temporal coupling of this network during AM retrieval in PTSD relative to controls. These findings suggest that the spatial and temporal characteristics of the default and control networks potentially differ in PTSD versus healthy controls, and contribute to altered recall of personal memory. PMID:23483523

  3. Image Filtering with Neural Networks: applications and performance evaluation

    NARCIS (Netherlands)

    Spreeuwers, Lieuwe Jan

    1992-01-01

    A simple and elegant method to design image filters with neural networks is proposed: using small networks that scan the image and perform position-invariant filtering. In the thesis, examples of image filtering with error backpropagation networks for edge detection, image deblurring and noise

  4. Neural network analysis of varying trends in real exchange rates

    NARCIS (Netherlands)

    J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    1999-01-01

    textabstractIn this paper neural networks are fitted to the real exchange rates of seven industrialized countries. The size and topology of the used networks is found by reducing the size of the network through the use of multiple correlation coefficients, principal component analysis of residuals

  5. Parameterized neural networks for high-energy physics

    Science.gov (United States)

    Baldi, Pierre; Cranmer, Kyle; Faucett, Taylor; Sadowski, Peter; Whiteson, Daniel

    2016-05-01

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results.
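
    The core construction is simply to append the physics parameter to the measured feature vector at both training and evaluation time, roughly as below; the mass values and array shapes are illustrative assumptions.

```python
import numpy as np

def parameterize(features, mass):
    """Append the signal-hypothesis parameter (e.g. a resonance mass) to each event's
    measured features, so one network covers the whole family of mass hypotheses."""
    return np.hstack([features, np.full((len(features), 1), mass)])

# Training: signal events carry their generated mass; background events are assigned
# masses drawn from the same set of training mass points.
# Evaluation at an intermediate mass just feeds that value at test time, e.g.:
#   scores = model.predict(parameterize(test_features, 750.0))
rng = np.random.default_rng(0)
print(parameterize(rng.normal(size=(3, 5)), mass=500.0).shape)   # (3, 6)
```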

  6. Parameterized neural networks for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, Pierre; Sadowski, Peter [University of California, Department of Computer Science, Irvine, CA (United States); Cranmer, Kyle [NYU, Department of Physics, New York, NY (United States); Faucett, Taylor; Whiteson, Daniel [University of California, Department of Physics and Astronomy, Irvine, CA (United States)

    2016-05-15

    We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results. (orig.)

  7. Stimulated Deep Neural Network for Speech Recognition

    Science.gov (United States)

    2016-09-08

    approaches yield state-of-the-art performance in a range of tasks, including speech recognition. However, the parameters of the network are hard to analyze...advantage of the smoothness constraints that stimulated training offers. The approaches are evaluated on two large vocabulary speech recognition

  8. Attentional neural networks impairment in healthy aging

    NARCIS (Netherlands)

    Vazquez-Marrufo, Manuel; Luisa Benitez, Maria; Rodriguez-Gomez, Guillermo; Galvao-Carmona, Alejandro; Fernandez-Del Olmo, Aaron; Vaquero-Casares, Encarnacion

    2011-01-01

    Introduction. Diverse evidence has shown that the process of natural aging causes a decline in different cognitive functions, including the attentional process. Aim. To determine how healthy aging affects the different attentional networks. Subjects and methods. Two groups: young

  9. Subgradient-based neural networks for nonsmooth nonconvex optimization problems.

    Science.gov (United States)

    Bian, Wei; Xue, Xiaoping

    2009-06-01

    This paper presents a subgradient-based neural network to solve a nonsmooth nonconvex optimization problem with a nonsmooth nonconvex objective function, a class of affine equality constraints, and a class of nonsmooth convex inequality constraints. The proposed neural network is modeled with a differential inclusion. Under a suitable assumption on the constraint set and a proper assumption on the objective function, it is proved that for a sufficiently large penalty parameter, there exists a unique global solution to the neural network and the trajectory of the network can reach the feasible region in finite time and stay there thereafter. It is proved that the trajectory of the neural network converges to the set which consists of the equilibrium points of the neural network, and coincides with the set which consists of the critical points of the objective function in the feasible region. A condition is given to ensure the convergence to the equilibrium point set in finite time. Moreover, under suitable assumptions, the coincidence between the solution to the differential inclusion and the "slow solution" of it is also proved. Furthermore, three typical examples are given to present the effectiveness of the theoretic results obtained in this paper and the good performance of the proposed neural network.
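
    A discrete-time caricature of the penalty-based dynamics is sketched below; the paper analyzes a continuous-time differential inclusion, so the objective, constraint, step size and penalty parameter here are invented purely for illustration.

```python
import numpy as np

# Minimize |x1| + x2**2 subject to x1 + x2 <= 1, handled with an exact-penalty term.
grad_f = lambda x: np.array([np.sign(x[0]), 2.0 * x[1]])            # a subgradient of f
grad_p = lambda x: np.array([1.0, 1.0]) * float(x[0] + x[1] > 1.0)  # penalty subgradient

x, lr, penalty = np.array([2.0, 2.0]), 0.01, 10.0
for _ in range(2000):
    x = x - lr * (grad_f(x) + penalty * grad_p(x))                  # subgradient step
print(x)   # approaches the feasible minimizer near (0, 0)
```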

  10. Robust CDMA multiuser detection using a neural-network approach.

    Science.gov (United States)

    Chuah, Teong Chee; Sharif, B S; Hinton, O R

    2002-01-01

    Recently, a robust version of the linear decorrelating detector (LDD) based on Huber's M-estimation technique has been proposed. In this paper, we first demonstrate the use of a three-layer recurrent neural network (RNN) to implement the LDD without requiring matrix inversion. The key idea is based on minimizing an appropriate computational energy function iteratively. Second, it is shown that the M-decorrelating detector (MDD) can be implemented by simply incorporating sigmoidal neurons in the first layer of the RNN. A proof of the redundancy of the matrix inversion process is provided and the computational saving in a realistic network is highlighted. Third, we illustrate how further performance gains can be achieved for the subspace-based blind MDD by using robust estimates of the signal subspace components in the initial stage. The impulsive noise is modeled using non-Gaussian alpha-stable distributions, which do not include a Gaussian component but facilitate the use of the recently proposed geometric signal-to-noise ratio (G-SNR). The characteristics and performance of the proposed neural-network detectors are investigated by computer simulation.

  11. ANOMALY NETWORK INTRUSION DETECTION SYSTEM BASED ON DISTRIBUTED TIME-DELAY NEURAL NETWORK (DTDNN)

    Directory of Open Access Journals (Sweden)

    LAHEEB MOHAMMAD IBRAHIM

    2010-12-01

    In this research, a hierarchical off-line anomaly network intrusion detection system based on a Distributed Time-Delay Artificial Neural Network is introduced. The research aims to solve a hierarchical multi-class problem in which the type of attack (DoS, U2R, R2L, or Probe) is detected by a dynamic neural network. The results indicate that dynamic neural networks (Distributed Time-Delay Artificial Neural Networks) can achieve a high detection rate, with an average overall classification accuracy of 97.24%.
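
    A rough sketch of the tapped-delay-line idea behind a time-delay network is given below; the synthetic features, window length, and MLP size are placeholder assumptions rather than the KDD-style data or the exact DTDNN architecture of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Tapped-delay-line classification: each decision uses the current
# connection record plus D delayed copies of the preceding records.
# Synthetic features and labels stand in for real traffic data.

rng = np.random.default_rng(1)
T, F, D = 2000, 6, 3                     # records, features per record, delays
X_seq = rng.standard_normal((T, F))
labels = (X_seq[:, 0] + X_seq[:, 1] > 1.0).astype(int)   # toy "attack" flag

# stack the current record and D delayed records into one input vector
X = np.hstack([X_seq[D - d : T - d] for d in range(D + 1)])
y = labels[D:]

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```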

  12. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Electric load variations can happen independently in both units. Both neural controllers are trained with the backpropagation-through-time algorithm. Use of a neural network to model the dynamic system is avoided by introducing the Jacobian matrices of the system into the backpropagation chain used in controller training.
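
    A minimal sketch of this training idea, assuming a toy first-order frequency model of my own choosing, is shown below: a small neural controller is tuned by backpropagation-through-time, and the plant's analytic Jacobians (here the scalars a and b) enter the backward chain instead of a learned plant model.

```python
import numpy as np

# Toy first-order frequency model  f[k+1] = a*f[k] + b*u[k] + c*dP
# with a small neural controller    u[k] = w[1] * tanh(w[0] * f[k]).
# The controller weights are trained by backpropagation-through-time,
# and the plant Jacobians (a wrt the state, b wrt the control) are used
# analytically in the backward chain.  All coefficients are illustrative.

a, b, c = 0.9, 0.05, -0.1     # plant Jacobians and disturbance gain
dP, T, lr = 1.0, 50, 0.5      # step load change, horizon, learning rate
w = np.array([1.0, 0.0])      # controller weights

for epoch in range(300):
    # forward simulation, storing the trajectory
    f, u = np.zeros(T + 1), np.zeros(T)
    for k in range(T):
        u[k] = w[1] * np.tanh(w[0] * f[k])
        f[k + 1] = a * f[k] + b * u[k] + c * dP

    # backward pass: accumulate dJ/dw for J = sum_k f[k+1]**2
    grad, adj = np.zeros(2), 0.0          # adj = dJ/df[k+1]
    for k in reversed(range(T)):
        adj += 2.0 * f[k + 1]             # direct loss term
        du = adj * b                      # through plant Jacobian wrt u[k]
        t = np.tanh(w[0] * f[k])
        grad[1] += du * t
        grad[0] += du * w[1] * (1 - t ** 2) * f[k]
        adj = adj * a + du * w[1] * (1 - t ** 2) * w[0]   # back to f[k]

    w -= lr * grad / T                    # gradient step on the controller

print("trained weights:", w, "final frequency deviation:", f[-1])
```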

  13. Parameter estimation of an aeroelastic aircraft using neural networks

    Indian Academy of Sciences (India)

    Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of predicting generalized force and moment coefficients using measured motion and control variables only, without any need for conventional normal elastic ...

  14. Assessment of neural networks performance in modeling rainfall ...

    African Journals Online (AJOL)

    This paper presents an evaluation of the performance of a Neural Network (NN) model in predicting the behavioral pattern of rainfall depths at some locations in the North Central zones of Nigeria. The input to the model is the consecutive rainfall depth data obtained from the Nigerian Meteorological Agency (NiMET). The neural ...

  15. Neural Network Approach to Locating Cryptography in Object Code

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Milos Manic

    2009-09-01

    Finding and identifying cryptography is a growing concern in the malware analysis community. In this paper, artificial neural networks are used to classify functional blocks from a disassembled program as being either cryptography-related or not. The resulting system, referred to as NNLC (Neural Net for Locating Cryptography), is presented and results of applying this system to various libraries are described.
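
    The sketch below illustrates the general setup, assuming an invented feature set (byte entropy plus the frequency of cipher-typical mnemonics) and toy data; it is not the NNLC feature extraction or its training corpus.

```python
import math
from collections import Counter
from sklearn.neural_network import MLPClassifier

# Each disassembled block is reduced to a few statistics -- byte entropy
# and the relative frequency of cipher-typical mnemonics -- and a small
# feed-forward network labels it crypto / non-crypto.  Feature set and
# training blocks are invented for illustration.

CRYPTO_HINTS = {"xor", "rol", "ror", "shl", "shr", "and", "or", "not"}

def block_features(mnemonics, raw_bytes):
    counts = Counter(raw_bytes)
    entropy = -sum(n / len(raw_bytes) * math.log2(n / len(raw_bytes))
                   for n in counts.values())
    hint_ratio = sum(m in CRYPTO_HINTS for m in mnemonics) / max(len(mnemonics), 1)
    return [entropy, hint_ratio, len(mnemonics)]

# toy training blocks: (mnemonic list, raw bytes, label)
blocks = [
    (["xor", "rol", "xor", "add"], bytes(range(200, 256)), 1),
    (["mov", "push", "call", "ret"], b"plain, repetitive code bytes" * 3, 0),
]
X = [block_features(m, rb) for m, rb, _ in blocks]
y = [label for _, _, label in blocks]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([block_features(["xor", "ror", "xor"], bytes(range(64)))]))
```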

  16. A neural network based approach to social touch classification

    NARCIS (Netherlands)

    van Wingerden, Siewart; Uebbing, Tobias J.; Jung, Merel Madeleine; Poel, Mannes

    2014-01-01

    Touch is an important modality in social interaction; for instance, touch can communicate emotions and can intensify emotions communicated by other modalities. In this paper, we explore the use of Neural Networks for the classification of touch. The exploration and assessment of Neural

  17. Human Parsing with Contextualized Convolutional Neural Network.

    Science.gov (United States)

    Liang, Xiaodan; Xu, Chunyan; Shen, Xiaohui; Yang, Jianchao; Tang, Jinhui; Lin, Liang; Yan, Shuicheng

    2016-03-02

    In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighborhood voting are formulated as natural sub-components of the Co-CNN to achieve local label consistency in both the training and testing processes. Comprehensive evaluations on two public datasets demonstrate the significant superiority of our Co-CNN over other state-of-the-art methods for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72% with Co-CNN, significantly higher than the 62.81% and 64.38% achieved by the state-of-the-art algorithms MCNN [2] and ATR [1], respectively. By using our newly collected large dataset for training, Co-CNN can achieve an F-1 score of 85.36%.
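
    The snippet below sketches just one of these ideas, namely the global image-level label prediction used as an auxiliary objective alongside the pixel-wise loss, with a deliberately tiny network; the layer sizes, loss weight, and single-label image target are illustrative assumptions, not the published Co-CNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny parsing network with two heads: per-pixel class logits and a
# global image-level prediction used as an auxiliary training objective.
# Architecture, channel sizes and the 0.3 loss weight are illustrative.

class TinyParser(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.pixel_head = nn.Conv2d(16, n_classes, 1)    # per-pixel labels
        self.image_head = nn.Linear(16, n_classes)       # image-level labels

    def forward(self, x):
        feats = self.backbone(x)
        pixel_logits = self.pixel_head(feats)
        global_logits = self.image_head(feats.mean(dim=(2, 3)))
        return pixel_logits, global_logits

model = TinyParser()
images = torch.randn(2, 3, 64, 64)
pixel_gt = torch.randint(0, 5, (2, 64, 64))      # per-pixel class map
image_gt = torch.randint(0, 5, (2,))             # toy dominant-class label

pixel_logits, global_logits = model(images)
loss = F.cross_entropy(pixel_logits, pixel_gt) \
     + 0.3 * F.cross_entropy(global_logits, image_gt)   # auxiliary objective
loss.backward()
print(float(loss))
```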

  18. Neural networks supporting switching, hypothesis testing, and rule application.

    Science.gov (United States)

    Liu, Zhiya; Braunlich, Kurt; Wehe, Hillary S; Seger, Carol A

    2015-10-01

    We identified dynamic changes in recruitment of neural connectivity networks across three phases of a flexible rule learning and set-shifting task similar to the Wisconsin Card Sort Task: switching, rule learning via hypothesis testing, and rule application. During fMRI scanning, subjects viewed pairs of stimuli that differed across four dimensions (letter, color, size, screen location), chose one stimulus, and received feedback. Subjects were informed that the correct choice was determined by a simple unidimensional rule, for example "choose the blue letter". Once each rule had been learned and correctly applied for 4-7 trials, subjects were cued via either negative feedback or visual cues to switch to learning a new rule. Task performance was divided into three phases: Switching (first trial after receiving the switch cue), hypothesis testing (subsequent trials through the last error trial), and rule application (correct responding after the rule was learned). We used both univariate analysis to characterize activity occurring within specific regions of the brain, and a multivariate method, constrained principal component analysis for fMRI (fMRI-CPCA), to investigate how distributed regions coordinate to subserve different processes. As hypothesized, switching was subserved by a limbic network including the ventral striatum, thalamus, and parahippocampal gyrus, in conjunction with cortical salience network regions including the anterior cingulate and frontoinsular cortex. Activity in the ventral striatum was associated with switching regardless of how switching was cued; visually cued shifts were associated with additional visual cortical activity. After switching, as subjects moved into the hypothesis testing phase, a broad fronto-parietal-striatal network (associated with the cognitive control, dorsal attention, and salience networks) increased in activity. This network was sensitive to rule learning speed, with greater extended activity for the slowest

  19. Guidance for the verification and validation of neural networks

    CERN Document Server

    Pullum, L; Darrah, M

    2007-01-01

    Guidance for the Verification and Validation of Neural Networks is a supplement to the IEEE Standard for Software Verification and Validation, IEEE Std 1012-1998. Born out of a need by the National Aeronautics and Space Administration's safety- and mission-critical research, this book compiles over five years of applied research and development efforts. It is intended to assist the performance of verification and validation (V&V) activities on adaptive software systems, with emphasis given to neural network systems. The book discusses some of the difficulties with trying to assure adaptive systems in general, presents techniques and advice for the V&V practitioner confronted with such a task, and based on a neural network case study, identifies specific tasking and recommendations for the V&V of neural network systems.

  20. Using machine learning, neural networks and statistics to predict bankruptcy

    NARCIS (Netherlands)

    Pompe, P.P.M.; Feelders, A.J.; Feelders, A.J.

    1997-01-01

    Recent literature strongly suggests that machine learning approaches to classification outperform "classical" statistical methods. We make a comparison between the performance of linear discriminant analysis, classification trees, and neural networks in predicting corporate bankruptcy. Linear