WorldWideScience

Sample records for neural networks provide

  1. Providing Morphological Information for SMT Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Passban Peyman

    2017-06-01

    Full Text Available Treating morphologically complex words (MCWs) as atomic units in translation would not yield a desirable result. Such words are complicated constituents with meaningful subunits. A complex word in a morphologically rich language (MRL) could be associated with a number of words or even a full sentence in a simpler language, which means the surface form of complex words should be accompanied by auxiliary morphological information in order to provide a precise translation and a better alignment. In this paper we follow this idea and propose two different methods to convey such information for statistical machine translation (SMT) models. In the first model we enrich factored SMT engines by introducing a new morphological factor which relies on subword-aware word embeddings. In the second model we focus on the language-modeling component. We explore a subword-level neural language model (NLM) to capture sequence-, word- and subword-level dependencies. Our NLM is able to approximate better scores for conditional word probabilities, so the decoder generates more fluent translations. We studied two languages, Farsi and German, in our experiments and observed significant improvements for both of them.
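
    As a rough illustration of what "subword-aware word embeddings" can look like (a fastText-style sum of character n-gram vectors, not necessarily the factor used by the authors), the following sketch builds a morphology-aware word vector; all names and sizes are hypothetical.

```python
# Illustrative sketch (not the authors' exact model): a fastText-style
# subword-aware embedding in which a word vector is the sum of the vectors
# of its character n-grams. All names and sizes here are hypothetical.
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Extract character n-grams of a word padded with boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

class SubwordEmbedder:
    def __init__(self, dim=64, buckets=10000, seed=0):
        rng = np.random.default_rng(seed)
        self.table = rng.normal(0.0, 0.1, size=(buckets, dim))
        self.buckets = buckets

    def embed(self, word):
        grams = char_ngrams(word)
        idx = [hash(g) % self.buckets for g in grams]
        return self.table[idx].sum(axis=0)

emb = SubwordEmbedder()
print(emb.embed("Hausaufgabe").shape)   # (64,) -- a morphology-aware vector
```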

  2. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction framed in terms of neurobiology, the multi-layer perceptron, its learning procedure and its use as a data classifier are introduced. A second part develops the mathematical approach in more detail, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools for event classification, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  3. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
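
    A minimal numerical sketch of the substitution described above, replacing the sum of products by a maximum (or minimum) of sums; the values are illustrative only.

```python
# Sketch of the max-of-sums ("morphological") neuron described above,
# contrasted with the usual sum-of-products neuron. Values are illustrative.
import numpy as np

x = np.array([0.2, 1.5, -0.3])          # neural values (inputs)
w = np.array([0.7, -0.1, 0.4])          # synaptic strengths

classical = np.dot(w, x)                 # sum of products, linear before threshold
morphological_max = np.max(x + w)        # max of sums (dilation-like)
morphological_min = np.min(x + w)        # min of sums (erosion-like)

print(classical, morphological_max, morphological_min)
```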

  4. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  5. Introduction to neural networks

    CERN Document Server

    James, Frederick E

    1994-02-02

    1. Introduction and overview of Artificial Neural Networks. 2-3. The feed-forward network as an inverse problem, and results on the computational complexity of network training. 4. Physics applications of neural networks.

  6. Neural network technologies

    Science.gov (United States)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  7. Neural Networks: Implementations and Applications

    NARCIS (Netherlands)

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  8. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural networks and machine learning in a statistical framework, this book offers a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  9. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....
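
    A hedged sketch of the ensemble idea, averaging the outputs of several similar networks trained from different random initialisations; the scikit-learn models and data set are stand-ins, and the cross-validation step for architecture selection is omitted for brevity.

```python
# Sketch of an ensemble of similar networks: train several MLPs with different
# seeds and average their predicted probabilities. Data and sizes are illustrative.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=600, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=seed).fit(X_tr, y_tr)
           for seed in range(5)]

proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)
ensemble_acc = np.mean(proba.argmax(axis=1) == y_te)
single_acc = members[0].score(X_te, y_te)
print(f"single net: {single_acc:.3f}  ensemble: {ensemble_acc:.3f}")
```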

  10. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  11. Neural network applications

    Science.gov (United States)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  12. Hyperbolic Hopfield neural networks.

    Science.gov (United States)

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.

  13. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed

  14. Multiprocessor Neural Network in Healthcare.

    Science.gov (United States)

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and the large number of I/O ports mean they are well suited for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG and EEG.

  15. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note provides an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  16. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
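
    The following sketch illustrates the same viewpoint on a toy problem, not the report's code: one-dimensional deconvolution written as y = Hx, solved once with the pseudo-inverse and once with plain gradient descent on the squared error (the simplest "backpropagation" of a single linear layer).

```python
# 1-D deconvolution as matrix inversion: recover x from y = H x using the
# pseudo-inverse and, alternatively, gradient descent on ||H x - y||^2.
import numpy as np

rng = np.random.default_rng(1)
n, k = 64, 9
kernel = np.hanning(k); kernel /= kernel.sum()

H = np.zeros((n, n))                       # convolution matrix ('same' output)
for i in range(n):
    for j in range(k):
        col = i + j - k // 2
        if 0 <= col < n:
            H[i, col] = kernel[j]

x_true = (rng.random(n) > 0.9).astype(float)      # sparse spike train
y = H @ x_true + 0.01 * rng.normal(size=n)        # blurred + noise

x_pinv = np.linalg.pinv(H) @ y                    # pseudo-inverse solution

x_gd = np.zeros(n)                                # gradient-descent solution
for _ in range(5000):
    grad = H.T @ (H @ x_gd - y)
    x_gd -= 0.5 * grad

print(np.linalg.norm(x_pinv - x_true), np.linalg.norm(x_gd - x_true))
```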

  17. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely: 1) Networks, Structure Optimisation, Robustness and Stochasticity; 2) Advances in Modelling Biological and Environmental Systems; and 3) Advances in Modelling Social and Economic Systems. The book aims to serve undergraduates, postgraduates and researchers in ANN computational modelling.

  18. Satellite image analysis using neural networks

    Science.gov (United States)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  19. Fuzzy neural networks: theory and applications

    Science.gov (United States)

    Gupta, Madan M.

    1994-10-01

    During recent years, significant advances have been made in two distinct technological areas: fuzzy logic and computational neural networks. The theory of fuzzy logic provides a mathematical framework to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. It also provides a mathematical morphology to emulate certain perceptual and linguistic attributes associated with human cognition. On the other hand, the computational neural network paradigms have evolved in the process of understanding the incredible learning and adaptive features of neuronal mechanisms inherent in certain biological species. Computational neural networks replicate, on a small scale, some of the computational operations observed in biological learning and adaptation. The integration of these two fields, fuzzy logic and neural networks, has given birth to an emerging technological field -- fuzzy neural networks. Fuzzy neural networks have the potential to capture the benefits of these two fascinating fields, fuzzy logic and neural networks, into a single framework. The intent of this tutorial paper is to describe the basic notions of biological and computational neuronal morphologies, and to describe the principles and architectures of fuzzy neural networks. Towards this goal, we develop a fuzzy neural architecture based upon the notion of T-norm and T-conorm connectives. An error-based learning scheme is described for this neural structure.
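
    As an illustration of the T-norm/T-conorm connectives mentioned above, the following hypothetical fuzzy OR-neuron combines inputs and weights with a minimum T-norm and aggregates them with a maximum T-conorm (a deliberately simple choice, not the architecture developed in the paper).

```python
# Illustrative fuzzy neuron built from T-norm / T-conorm connectives
# (min and max are the simplest choices); membership values are hypothetical.
import numpy as np

def t_norm(a, b):      # fuzzy AND (minimum T-norm)
    return np.minimum(a, b)

def fuzzy_or_neuron(x, w):
    """OR-neuron: T-norm of each input with its weight, then max T-conorm."""
    weighted = t_norm(x, w)              # elementwise AND of input and weight
    return np.maximum.reduce(weighted)   # OR-aggregation across inputs

x = np.array([0.8, 0.3, 0.6])   # fuzzy membership degrees of the inputs
w = np.array([0.9, 0.5, 0.2])   # fuzzy connection weights
print(fuzzy_or_neuron(x, w))    # min(0.8, 0.9) = 0.8 dominates the OR
```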

  20. Neural network based system for equipment surveillance

    Science.gov (United States)

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network outputs are close to the target outputs. If the match is inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided and compared with the neural network output to evaluate the operating state of the industrial process. 33 figs.

  1. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  2. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities is discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinnings of neural network models; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all the necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics who have an interest in app...

  3. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  4. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  6. Program Aids Simulation Of Neural Networks

    Science.gov (United States)

    Baffes, Paul T.

    1990-01-01

    Computer program NETS - Tool for Development and Evaluation of Neural Networks - provides simulation of neural-network algorithms plus software environment for development of such algorithms. Enables user to customize patterns of connections between layers of network, and provides features for saving weight values of network, providing for more precise control over learning process. Consists of translating problem into format using input/output pairs, designing network configuration for problem, and finally training network with input/output pairs until acceptable error reached. Written in C.

  7. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  8. [Artificial neural networks in Neurosciences].

    Science.gov (United States)

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease in neurotransmitters on the behaviour of older people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of older people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network whose operation is inspired by the nervous system, the way the inputs are coded, and the process of orthogonalization of patterns.

  9. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in sufficient detail to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  10. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  11. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  12. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... examined, and it appears that considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible, and therefore it is not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models...... Kalman filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  13. Removing Epistemological Bias From Empirical Observation of Neural Networks

    OpenAIRE

    Waldron, Ronan

    1994-01-01

    Also in Proceedings of the International Joint Conference on Neural Networks, Nagoya, Japan. This paper addresses the application of neural network research to a theory of autonomous systems. Neural networks, while enjoying considerable success in autonomous systems applications, have failed to provide a firm theoretical underpinning to neural systems embedded in their natural ecological context. This paper proposes a stochastic formulation of such an embedding. A neural sys...

  14. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
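
    For orientation, a common (and different) way to make a neural network estimate a conditional quantile, not the kernel-based construction developed in this paper, is to train it with the pinball loss, whose minimiser is the tau-quantile.

```python
# Pinball (quantile) loss: an asymmetric penalty whose minimiser is the
# tau-quantile of the target distribution. Toy check with a constant predictor.
import numpy as np

def pinball_loss(y_true, y_pred, tau=0.9):
    e = y_true - y_pred
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

rng = np.random.default_rng(0)
y = rng.exponential(size=10000)
grid = np.linspace(0.0, 5.0, 501)
best = grid[np.argmin([pinball_loss(y, c) for c in grid])]
print(best, np.quantile(y, 0.9))   # the two values should be close
```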

  15. An Optoelectronic Neural Network

    Science.gov (United States)

    Neil, Mark A. A.; White, Ian H.; Carroll, John E.

    1990-02-01

    We describe and present results of an optoelectronic neural network processing system. The system uses an algorithm based on the Hebbian learning rule to memorise a set of associated vector pairs. Recall occurs by the processing of the input vector with these stored associations in an incoherent optical vector multiplier using optical polarisation rotating liquid crystal spatial light modulators to store the vectors and an optical polarisation shadow casting technique to perform multiplications. Results are detected on a photodiode array and thresholded electronically by a controlling microcomputer. The processor is shown to work in autoassociative and heteroassociative modes with up to 10 stored memory vectors of length 64 (equivalent to 64 neurons) and a cycle time of 50ms. We discuss the limiting factors at work in this system, how they affect its scalability and the general applicability of its principles to other systems.

  16. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
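
    A hedged sketch of the general idea, not the paper's exact algorithm: treat the units of a trained layered network as nodes of a graph weighted by connection magnitude, then detect communities by modularity (here with networkx; the weights are random stand-ins for a trained network).

```python
# Build a unit graph from layer weight matrices and detect modules by modularity.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 6))   # hypothetical trained weights: input(8) -> hidden(6)
W2 = rng.normal(size=(6, 3))   # hidden(6) -> output(3)

G = nx.Graph()

def add_layer(W, pre, post, thresh=0.8):
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            if abs(W[i, j]) > thresh:                 # keep only strong connections
                G.add_edge(f"{pre}{i}", f"{post}{j}", weight=abs(W[i, j]))

add_layer(W1, "in", "hid")
add_layer(W2, "hid", "out")

for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))                          # one cluster of units per line
```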

  17. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  18. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    simulated process and compared. The closing chapter describes some practical experiments, where the different control concepts and training methods are tested on the same practical process operating in very noisy environments. All tests confirm that neural networks also have the potential to be trained......The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...

  19. A neural network simulation package in CLIPS

    Science.gov (United States)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule based systems in conjunction with neural networks to solve complex problems. The system provides a toolkit for integrated use of the two techniques and is also extensible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  20. Neural networks as models of psychopathology.

    Science.gov (United States)

    Aakerlund, L; Hemmingsen, R

    1998-04-01

    Neural network modeling is situated between neurobiology, cognitive science, and neuropsychology. The structural and functional resemblance with biological computation has made artificial neural networks (ANN) useful for exploring the relationship between neurobiology and computational performance, i.e., cognition and behavior. This review provides an introduction to the theory of ANN and how they have linked theories from neurobiology and psychopathology in schizophrenia, affective disorders, and dementia.

  1. Pansharpening by Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Giuseppe Masi

    2016-07-01

    Full Text Available A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem. Moreover, to improve performance without increasing complexity, we augment the input by including several maps of nonlinear radiometric indices typical of remote sensing. Experiments on three representative datasets show the proposed method to provide very promising results, largely competitive with the current state of the art in terms of both full-reference and no-reference metrics, and also at a visual inspection.

  2. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  3. Neural-like growing networks

    Science.gov (United States)

    Yashchenko, Vitaliy A.

    2000-03-01

    On the basis of an analysis of scientific ideas about the laws governing the structure and functioning of the biological structures of the brain, together with an analysis and synthesis of knowledge developed in various directions of computer science, the foundations of a theory of a new class of neural-like growing networks, having no analogue in world practice, were developed. Neural-like growing networks rest on a synthesis of the knowledge developed by two classical theories: semantic networks and neural networks. The first makes it possible to represent meaning as objects and the connections between them in the construction of the network; each meaning is thereby assigned a separate component of the network, a node connected to other nodes. On the whole this corresponds to the structure reflected in the brain, where each distinct concept is represented by a certain structure and has a designating symbol. Secondly, the network gains increased semantic clarity owing to the formation not only of connections between neural elements but also of the elements themselves; that is, the network is not simply built by placing semantic structures in an environment of neural elements, but by creating that very environment as an equivalent of a memory environment. Neural-like growing networks thus provide a convenient apparatus for modeling the mechanisms of teleological thinking as the fulfillment of certain psychophysiological functions.

  4. Bayesian regularization of neural networks.

    Science.gov (United States)

    Burden, Frank; Winkler, Dave

    2008-01-01

    Bayesian regularized artificial neural networks (BRANNs) are more robust than standard back-propagation nets and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of a ridge regression. The advantage of BRANNs is that the models are robust and the validation process, which scales as O(N^2) in normal regression methods, such as back propagation, is unnecessary. These networks provide solutions to a number of problems that arise in QSAR modeling, such as choice of model, robustness of model, choice of validation set, size of validation effort, and optimization of network architecture. They are difficult to overtrain, since evidence procedures provide an objective Bayesian criterion for stopping training. They are also difficult to overfit, because the BRANN calculates and trains on a number of effective network parameters or weights, effectively turning off those that are not relevant. This effective number is usually considerably smaller than the number of weights in a standard fully connected back-propagation neural net. Automatic relevance determination (ARD) of the input variables can be used with BRANNs, and this allows the network to "estimate" the importance of each input. The ARD method ensures that irrelevant or highly correlated indices used in the modeling are neglected as well as showing which are the most important variables for modeling the activity data. This chapter outlines the equations that define the BRANN method plus a flowchart for producing a BRANN-QSAR model. Some results of the use of BRANNs on a number of data sets are illustrated and compared with other linear and nonlinear models.
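
    The flavour of the method can be conveyed on a model that is linear in its parameters, where the Hessian is analytic; this is a generic evidence-framework sketch with the regularised objective F = beta*E_D + alpha*E_W and the effective number of parameters gamma, not the exact BRANN implementation discussed in the chapter.

```python
# Evidence-framework sketch of Bayesian regularisation on a linear model:
# minimise F = beta*E_D + alpha*E_W and re-estimate alpha, beta from gamma.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.0, 0.5]     # only 3 relevant weights
y = X @ w_true + 0.1 * rng.normal(size=200)

alpha, beta = 1.0, 1.0
for _ in range(20):
    A = beta * X.T @ X + alpha * np.eye(10)              # Hessian of F
    w = beta * np.linalg.solve(A, X.T @ y)               # most probable weights
    gamma = 10 - alpha * np.trace(np.linalg.inv(A))      # effective no. of params
    alpha = gamma / (w @ w)                              # re-estimate hyperparameters
    beta = (len(y) - gamma) / np.sum((y - X @ w) ** 2)

print(np.round(w, 2), "effective parameters ~", round(gamma, 1))
```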

  5. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  6. Artificial Neural Networks·

    Indian Academy of Sciences (India)

    differences between biological neural networks (BNNs) of the brain and ANNs. A thorough understanding of ... neurons. Artificial neural models are loosely based on biology since a complete understanding of the .... A learning scheme for updating a neuron's connections (weights) was proposed by Donald Hebb in 1949.

  7. Memristor-based neural networks

    Science.gov (United States)

    Thomas, Andy

    2013-03-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them.
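
    As a concrete illustration of the spike time-dependent plasticity mentioned above, the following sketch implements the standard exponential STDP window; amplitudes and time constants are illustrative and not taken from any particular memristive device.

```python
# Standard exponential STDP window: the weight change depends on the time
# difference between post- and pre-synaptic spikes. Parameters are illustrative.
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """delta_t = t_post - t_pre (ms). Pre-before-post potentiates, else depresses."""
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

for dt in (-40.0, -10.0, 5.0, 30.0):
    print(f"t_post - t_pre = {dt:+.0f} ms -> dw = {float(stdp_dw(dt)):+.5f}")
```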

  8. Pansharpening by Convolutional Neural Networks

    National Research Council Canada - National Science Library

    Masi, Giuseppe; Cozzolino, Davide; Verdoliva, Luisa; Scarpa, Giuseppe

    2016-01-01

    A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem...

  9. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: February 2008.

  10. Biologically Inspired Modular Neural Networks

    OpenAIRE

    Azam, Farooq

    2000-01-01

    This dissertation explores modular learning in artificial neural networks, driven mainly by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological and neurobiological considerations. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning ...

  11. Medical Text Classification using Convolutional Neural Networks

    OpenAIRE

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.

  12. Medical Text Classification Using Convolutional Neural Networks.

    Science.gov (United States)

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.

  13. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost function. Owing to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution to the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can therefore be considered an efficient way of reducing the time requirements of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
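
    Although the paper's exact cost function is not reproduced here, the generic argument for a logarithmic cost can be illustrated with the cross-entropy-style log loss: for a sigmoid output its gradient does not vanish when the output is far from the target, whereas the squared-error gradient does.

```python
# Compare gradients w.r.t. the pre-activation z of a sigmoid unit p = sigmoid(z)
# for the squared-error cost and a logarithmic (cross-entropy-style) cost.
import numpy as np

y = 1.0                                   # target
p = np.linspace(0.01, 0.99, 5)            # network output (sigmoid activation)

squared_grad = (p - y) * p * (1 - p)      # d/dz of 0.5*(p - y)^2
log_grad = (p - y)                        # d/dz of -[y*log p + (1-y)*log(1-p)]

for pi, g1, g2 in zip(p, squared_grad, log_grad):
    print(f"output={pi:.2f}  squared-error grad={g1:+.3f}  log-cost grad={g2:+.3f}")
```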

  14. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks have been discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank

  15. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNN research are included, resulting in, for example, an almost doubled number of references. The parametron, invented in 1954, is also discussed, with attention to analogies and differences. Various additional arguments on the advantages of complex-valued neural networks over real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  16. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  17. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    National Research Council Canada - National Science Library

    Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey

    2006-01-01

    .... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...

  18. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network in this paper is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexities, the number of state variables of which is equal to that of the dimension of the optimization problems. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
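
    As a hedged illustration of what a projection neural network looks like, the sketch below solves a simple box-constrained convex quadratic programme (not the paper's more general minimax model) by Euler-integrating the projection dynamics dx/dt = -x + P(x - (Qx + c)), where P projects onto the feasible box.

```python
# Projection neural network dynamics for a box-constrained convex QP:
# minimise 0.5*x'Qx + c'x subject to lo <= x <= hi.
import numpy as np

Q = np.array([[3.0, 0.5],
              [0.5, 2.0]])                              # positive definite
c = np.array([-3.0, 1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])     # box constraints

def project(x):
    return np.clip(x, lo, hi)

x = np.zeros(2)
dt = 0.05
for _ in range(2000):
    x = x + dt * (-x + project(x - (Q @ x + c)))        # Euler step of the dynamics

print("equilibrium (approx. constrained minimiser):", np.round(x, 4))
```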

  19. Multigradient for Neural Networks for Equalizers

    Directory of Open Access Journals (Sweden)

    Chulhee Lee

    2003-06-01

    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that multigradient outperforms backpropagation when neural networks are used as classifiers. When neural networks are used as equalizers in communications, they can be viewed as classifiers. In this paper, we apply the multigradient algorithm to train neural networks used as equalizers. Experiments show that neural networks trained using multigradient noticeably outperform neural networks trained by backpropagation.

  20. Representations in neural network based empirical potentials

    Science.gov (United States)

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios

    2017-07-01

    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.
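
    A minimal sketch of the general workflow described above: train a network, collect its hidden-layer activations, and apply a dimensionality-reduction technique (PCA here) to inspect the learned representation; the data and model are toy stand-ins, not the silicon potential of the paper.

```python
# Collect hidden-layer activations of a trained network and project them to 2-D.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X, y)

# Forward pass up to the hidden layer (MLPClassifier stores coefs_/intercepts_;
# its default hidden activation is ReLU).
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

coords = PCA(n_components=2).fit_transform(hidden)
print(coords.shape)   # (n_samples, 2): a 2-D map of the learned representation
```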

  1. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  3. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat

  4. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  5. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  6. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  7. voltage compensation using artificial neural network

    African Journals Online (AJOL)

    Offor Theophilos

    VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK: A CASE STUDY OF RUMUOLA ... using an artificial neural network (ANN) controller based dynamic voltage restorer (DVR). ... substation by simulating with samples of average voltage for Omerelu, Waterlines, Rumuola, Shell Industrial and Barracks.

  8. Plant Growth Models Using Artificial Neural Networks

    Science.gov (United States)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  9. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
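
    The adaptive neurons covered in the second lecture are typified by the classic perceptron learning rule; the minimal Python sketch below, on synthetic linearly separable data, illustrates that rule and is not material from the tutorial itself.

    ```python
    # Minimal perceptron learning rule on a linearly separable toy problem.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)    # synthetic labels

    w = np.zeros(2)
    b = 0.0
    for _ in range(20):                           # a few passes over the data
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:            # misclassified sample -> update
                w += yi * xi
                b += yi

    accuracy = np.mean(np.sign(X @ w + b) == y)
    print(f"training accuracy: {accuracy:.2f}")
    ```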

  10. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    Optoelectronic Implementation of Neural Networks - Use of Optics in Computing. R Ramachandran. General Article, Resonance – Journal of Science Education, Volume 3, Issue 9, September 1998, pp 45-55.

  11. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  12. Memory-optimal neural network approximation

    Science.gov (United States)

    Bölcskei, Helmut; Grohs, Philipp; Kutyniok, Gitta; Petersen, Philipp

    2017-08-01

    We summarize the main results of a recent theory-developed by the authors-establishing fundamental lower bounds on the connectivity and memory requirements of deep neural networks as a function of the complexity of the function class to be approximated by the network. These bounds are shown to be achievable. Specifically, all function classes that are optimally approximated by a general class of representation systems-so-called affine systems-can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, shearlets, ridgelets, α-shearlets, and more generally α-molecules. This result elucidates a remarkable universality property of deep neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks which provide close-to-optimal approximation rates at minimal connectivity. Moreover, stochastic gradient descent is found to actually learn approximations that are sparse in the representation system optimally sparsifying the function class the network is trained on.

  13. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.
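
    The pipeline described above, rule-derived features used to train a back-propagation network that then classifies packet records, might look roughly like the sketch below; the feature layout, labels and network size are illustrative assumptions rather than the reported system.

    ```python
    # Rough sketch of the rule-trained classifier idea: an MLP (trained by
    # back-propagation) learns labels derived from IDS rule features, then
    # classifies new packet feature vectors. The feature encoding is hypothetical.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    # Hypothetical per-packet features (e.g. protocol, type, code, length), scaled to [0, 1].
    rule_features = rng.integers(0, 256, size=(1000, 6)) / 255.0
    rule_labels = (rule_features[:, 1] > 0.5).astype(int)   # stand-in for rule verdicts

    # Error back-propagation network trained on the rule-derived labels.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    clf.fit(rule_features, rule_labels)

    # New packet feature vectors are then classified by the trained network.
    new_packets = rng.integers(0, 256, size=(5, 6)) / 255.0
    print(clf.predict(new_packets))
    ```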

  14. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  15. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  16. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    In this paper, a quantum based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and a separability parameter in a quantum based manner. The quantum computing concept represents solutions probabilistically ...

  17. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research used the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated as a weighted average of the score (1 to 5) determined for each HVI characteristic evaluated, according to industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index; using fiber length together with the short fiber index, fiber maturity, and micronaire index gave better results than using fiber length alone or the previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better genotype classification. The results of the present study indicate that artificial neural networks have great potential for use at the different stages of a cotton genetic improvement program aimed at improving the fiber quality of future cultivars.

  18. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habit, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific application-typing stream. This paper experimentally uses a neural network approach to analyze the disabled users' typing streams both in general and specific ways to identify their typing behaviors and subsequently, to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first three HR in symbol prediction are obtained based on the analysis of a user's typing history through the FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly in between 65% and 90% with the current testing samples, and 70% of all test scores above basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool to analyze the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.

  19. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    Generalized classifier neural network introduced as a kind of radial basis function neural network, uses gradient descent based optimized smoothing parameter value to provide efficient classification. However, optimization consumes quite a long time and may cause a drawback. In this work, one pass learning for generalized classifier neural network is proposed to overcome this disadvantage. Proposed method utilizes standard deviation of each class to calculate corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, proposed method tries to handle these differences by defining two functions for smoothing parameter calculation. Thresholding is applied to determine which function will be used. One of these functions is defined for datasets having different range of values. It provides balanced smoothing parameters for these datasets through logarithmic function and changing the operation range to lower boundary. On the other hand, the other function calculates smoothing parameter value for classes having standard deviation smaller than the threshold value. Proposed method is tested on 14 datasets and performance of one pass learning generalized classifier neural network is compared with that of probabilistic neural network, radial basis function neural network, extreme learning machines, and standard and logarithmic learning generalized classifier neural network in MATLAB environment. One pass learning generalized classifier neural network provides more than a thousand times faster classification than standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, one pass generalized classifier neural network can be considered as an efficient alternative to probabilistic neural network. Test results show that proposed method overcomes computational drawback of generalized classifier neural network and may increase the classification performance. Copyright
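
    A very rough sketch of the per-class smoothing-parameter idea follows; the two candidate formulas and the threshold are placeholders chosen for illustration, since the abstract does not give the exact functions used.

    ```python
    # Sketch of one-pass smoothing-parameter selection: each class gets a smoothing
    # parameter derived from its standard deviation, and a threshold decides which
    # of two (here hypothetical) formulas is applied.
    import numpy as np

    def smoothing_parameters(X, y, threshold=1.0):
        params = {}
        for label in np.unique(y):
            std = X[y == label].std()
            if std > threshold:
                # wide-range classes: compress via a logarithmic mapping (assumed form)
                params[label] = np.log1p(std)
            else:
                # narrow classes: value proportional to the small deviation (assumed form)
                params[label] = std + 1e-3
        return params

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 0.2, (50, 4)), rng.normal(5, 3.0, (50, 4))])
    y = np.repeat([0, 1], 50)
    print(smoothing_parameters(X, y))
    ```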

  20. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
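
    Reducing decoding to classification means mapping a syndrome bit-string to a correction class with a feedforward network; the sketch below illustrates only that framing on synthetic syndrome/label pairs and is not the authors' decoder.

    ```python
    # Sketch of decoding-as-classification: a feedforward network maps syndrome
    # bits to one of a few correction classes. Data here are synthetic placeholders;
    # a real decoder would use error classes sampled from a noise model.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    n_syndrome_bits = 8                       # roughly the scale of a small code
    syndromes = rng.integers(0, 2, size=(2000, n_syndrome_bits))
    labels = syndromes.sum(axis=1) % 4        # stand-in for correction classes

    decoder = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    decoder.fit(syndromes[:1500], labels[:1500])

    # Generalization to syndromes not seen during training is the key requirement.
    print("held-out accuracy:", decoder.score(syndromes[1500:], labels[1500:]))
    ```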

  1. Dynamic properties of cellular neural networks

    Directory of Open Access Journals (Sweden)

    Angela Slavova

    1993-01-01

    Full Text Available Dynamic behavior of a new class of information-processing systems called Cellular Neural Networks is investigated. In this paper we introduce a small parameter in the state equation of a cellular neural network and we seek periodic phenomena. A new approach is used for proving stability of a cellular neural network by constructing Lyapunov's majorizing equations. This algorithm is helpful for finding a map from the initial continuous state space of a cellular neural network into a discrete output. A comparison between cellular neural networks and cellular automata is made.

  2. Foetal ECG recovery using dynamic neural networks.

    Science.gov (United States)

    Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan

    2004-07-01

    Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetus state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise to the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both
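
    For reference, the LMS baseline used in the adaptive noise cancellation (ANC) scheme can be written in a few lines; the sketch below cancels a synthetic interference from a made-up mixture rather than the registers studied in the paper.

    ```python
    # Minimal LMS adaptive noise canceller: the reference input (noise source)
    # is filtered to approximate the interference in the primary input, and the
    # residual e[n] is the recovered signal of interest.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 5000
    foetal = 0.1 * np.sin(2 * np.pi * 2.3 * np.arange(n) / 500)   # weak signal of interest
    reference = rng.normal(size=n)                                # noise reference channel
    interference = np.convolve(reference, [0.8, -0.3, 0.2], mode="same")
    primary = foetal + interference                               # observed mixture

    order, mu = 8, 0.01
    w = np.zeros(order)
    recovered = np.zeros(n)
    for i in range(order, n):
        x = reference[i - order:i][::-1]       # most recent samples first
        y = w @ x                              # estimate of the interference
        e = primary[i] - y                     # residual, approximately the foetal component
        w += 2 * mu * e * x                    # LMS weight update
        recovered[i] = e

    print("residual error power:", np.mean((recovered - foetal) ** 2))
    ```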

  3. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  4. Adaptive training of feedforward neural networks by Kalman filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering; Tuerkcan, E. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands)

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by neural networks in a real-time environment, where the trained network is used for system estimation while it is further trained with the information provided by the ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.).
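
    As a hedged illustration of the idea, not the authors' implementation, the sketch below treats the weights of a single linear neuron as the Kalman state and updates them sample by sample, which is what lets the estimator keep adapting during ongoing operation.

    ```python
    # Sketch of Kalman-filter weight estimation for a single linear neuron:
    # state = weights, measurement = target output, measurement matrix = input.
    import numpy as np

    rng = np.random.default_rng(6)
    true_w = np.array([0.5, -1.2, 2.0])
    n_w = true_w.size

    w = np.zeros(n_w)                 # weight estimate (state mean)
    P = np.eye(n_w) * 10.0            # state covariance
    Q = np.eye(n_w) * 1e-5            # process noise: lets weights keep adapting
    R = 0.01                          # measurement noise variance

    for _ in range(2000):
        x = rng.normal(size=n_w)                 # input pattern
        d = true_w @ x + rng.normal(scale=0.1)   # noisy target
        P = P + Q                                # time update
        S = x @ P @ x + R                        # innovation variance
        K = P @ x / S                            # Kalman gain
        w = w + K * (d - w @ x)                  # measurement update of the weights
        P = P - np.outer(K, x) @ P

    print("estimated weights:", np.round(w, 3))
    ```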

  5. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  6. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is dedicated to investigators in visual pattern recognition, Artificial Neural Networking and related disciplines. The document also describes the MemBrain application environment as a powerful and easy to use neural network editor and simulator supporting ANNs.

  7. Identifying Tracks Duplicates via Neural Network

    CERN Document Server

    Sunjerga, Antonio; CERN. Geneva. EP Department

    2017-01-01

    The goal of the project is to study the feasibility of state of the art machine learning techniques in track reconstruction. Machine learning techniques provide promising ways to speed up the pattern recognition of tracks by adding more intelligence to the algorithms. The implementation of a neural network for identifying track duplicates will be discussed. Different approaches are shown and results are compared to the method that is currently in use.

  8. Pediatric Nutritional Requirements Determination with Neural Networks

    OpenAIRE

    Karlık, Bekir; Ece, Aydın

    1998-01-01

    To calculate the daily nutritional requirements of children, a computer program has been developed based upon a neural network. Three parameters, daily protein, energy and water requirements, were calculated through trained artificial neural networks using a database of 312 children. The results were compared with those calculated from the dietary requirement tables of the World Health Organisation. No significant difference was found between the two calculations. In conclusion, a simple neural network may ...

  9. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  10. Neural network controller for underwater work ROV. Suichu sagyoyo ROV no neural network controller

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Y.; Kidoshi, H.; Arahata, M.; Shoji, K.; Takahashi, Y. (Ishikawajima-Harima Heavy Industries, Co. Ltd., Tokyo (Japan))

    1993-07-01

    The previous underwater work ROV (remotely operated vehicle) has been controlled manually because its dynamic properties change underwater. Ishikawajima-Harima Heavy Industries (IHI) has applied a neural network to an adaptive controller for the ROV. This paper describes the objectives of the research, the design of the control logic, and tank experiments on a model ROV. Manual operation was used to provide the initial learning data for the neural network in order to initialize the control parameters for optimization. The model ROV was designed to achieve and maintain a constant depth in normal operation. As a consequence of the tank experiments, it was demonstrated that the controller can acquire the skill of operators, can further improve that acquired skill, and can construct an automatic control system autonomously even when the dynamic properties are not known. 6 refs., 8 figs.

  11. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real-time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
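
    The identification step described above amounts to treating an unknown spectrum as a linear superposition of known library spectra and solving for the mixing coefficients; the sketch below does this with a least-squares (pseudoinverse) fit on synthetic spectra, as a general illustration of the approach rather than the laboratory system.

    ```python
    # Sketch of the linear-superposition idea behind OLAM-style identification:
    # solve spectrum = library @ fractions for the isotope fractions.
    import numpy as np

    rng = np.random.default_rng(7)
    n_channels, n_isotopes = 256, 4
    library = np.abs(rng.normal(size=(n_channels, n_isotopes)))   # known reference spectra

    true_fractions = np.array([0.6, 0.0, 0.3, 0.1])
    unknown = library @ true_fractions + rng.normal(scale=0.01, size=n_channels)

    # Whole-spectrum fit (not just photo-peaks), via the pseudoinverse.
    fractions, *_ = np.linalg.lstsq(library, unknown, rcond=None)
    print("estimated composition:", np.round(fractions, 2))
    ```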

  12. Neural networks and perceptual learning

    Science.gov (United States)

    Tsodyks, Misha; Gilbert, Charles

    2005-01-01

    Sensory perception is a learned trait. The brain strategies we use to perceive the world are constantly modified by experience. With practice, we subconsciously become better at identifying familiar objects or distinguishing fine details in our environment. Current theoretical models simulate some properties of perceptual learning, but neglect the underlying cortical circuits. Future neural network models must incorporate the top-down alteration of cortical function by expectation or perceptual tasks. These newly found dynamic processes are challenging earlier views of static and feedforward processing of sensory information. PMID:15483598

  13. Optimization with Potts Neural Networks

    Science.gov (United States)

    Söderberg, Bo

    The Potts Neural Network approach to non-binary discrete optimization problems is described. It applies to problems that can be described as a set of elementary 'multiple choice' options. Instead of the conventional binary (Ising) neurons, mean field Potts neurons, having several available states, are used to describe the elementary degrees of freedom of such problems. The dynamics consists of iterating the mean field equations with annealing until convergence. Due to its deterministic character, the method is quite fast. When applied to problems of Graph Partition and scheduling types, it produces very good solutions also for problems of considerable size.
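
    A minimal sketch of the mean-field Potts iteration, applied here to a toy graph-partition problem, is given below; the local field, balance penalty and annealing schedule are illustrative choices, not taken from the article.

    ```python
    # Mean-field Potts annealing for a toy graph partition into K parts: each
    # node i carries a K-state probability vector v[i], updated by a softmax of
    # a local "field" while the temperature is annealed from high to low.
    import numpy as np

    rng = np.random.default_rng(8)
    n, K, alpha = 30, 3, 0.5
    W = (rng.random((n, n)) < 0.2).astype(float)   # random undirected graph
    W = np.triu(W, 1); W = W + W.T

    v = rng.dirichlet(np.ones(K), size=n)          # soft assignments (Potts mean fields)
    for T in np.geomspace(2.0, 0.05, 60):          # annealing schedule
        for i in rng.permutation(n):
            # field: attraction to neighbours' parts, penalty on oversized parts
            u = W[i] @ v - alpha * (v.sum(axis=0) - v[i])
            e = np.exp((u - u.max()) / T)
            v[i] = e / e.sum()                     # mean-field (softmax) update

    parts = v.argmax(axis=1)
    cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n) if parts[i] != parts[j])
    print("partition sizes:", np.bincount(parts, minlength=K), "cut edges:", int(cut))
    ```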

  14. A Novel Neural Network for Generally Constrained Variational Inequalities.

    Science.gov (United States)

    Gao, Xingbao; Liao, Li-Zhi

    2017-09-01

    This paper presents a novel neural network for solving generally constrained variational inequality problems by constructing a system of double projection equations. By defining proper convex energy functions, the proposed neural network is proved to be stable in the sense of Lyapunov and converges to an exact solution of the original problem for any starting point under the weaker cocoercivity condition or the monotonicity condition of the gradient mapping on the linear equation set. Furthermore, two sufficient conditions are provided to ensure the stability of the proposed neural network for a special case. The proposed model overcomes some shortcomings of existing continuous-time neural networks for constrained variational inequality, and its stability only requires some monotonicity conditions of the underlying mapping and the concavity of nonlinear inequality constraints on the equation set. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.

  15. Obstacle avoidance for power wheelchair using bayesian neural network.

    Science.gov (United States)

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2007-01-01

    In this paper we present a real-time obstacle avoidance algorithm using a Bayesian neural network for a laser based wheelchair system. The raw laser data is modified to accommodate the wheelchair dimensions, allowing the free-space to be determined accurately in real-time. Data acquisition is performed to collect the patterns required for training the neural network. A Bayesian framework is applied to determine the optimal neural network structure for the training data. This neural network is trained under the supervision of the Bayesian rule and the obstacle avoidance task is then implemented for the wheelchair system. Initial results suggest this approach provides an effective solution for autonomous tasks, suggesting Bayesian neural networks may be useful for wider assistive technology applications.

  16. Three dimensional living neural networks

    Science.gov (United States)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo, and when exposed to retinoic acid the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  17. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
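
    The normalized Laplacian spectrum used in this analysis is straightforward to compute for any adjacency matrix; the sketch below does so for a small random graph purely as an illustration.

    ```python
    # Eigenvalue spectrum of the normalized Laplacian L = I - D^(-1/2) A D^(-1/2)
    # for an undirected graph given by its adjacency matrix A.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 50
    A = (rng.random((n, n)) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T                      # undirected, no self-loops

    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                                 # guard against isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

    eigenvalues = np.linalg.eigvalsh(L)                 # spectrum lies in [0, 2]
    print(eigenvalues[:5], eigenvalues[-5:])
    ```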

  18. Forecasting Energy Commodity Prices Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Massimo Panella

    2012-01-01

    Full Text Available A new machine learning approach for price modeling is proposed. The use of neural networks as an advanced signal processing tool may be successfully used to model and forecast energy commodity prices, such as crude oil, coal, natural gas, and electricity prices. Energy commodities have shown explosive growth in the last decade. They have become a new asset class used also for investment purposes. This creates a huge demand for better modeling as what occurred in the stock markets in the 1970s. Their price behavior presents unique features causing complex dynamics whose prediction is regarded as a challenging task. The use of a Mixture of Gaussian neural network may provide significant improvements with respect to other well-known models. We propose a computationally efficient learning of this neural network using the maximum likelihood estimation approach to calibrate the parameters. The optimal model is identified using a hierarchical constructive procedure that progressively increases the model complexity. Extensive computer simulations validate the proposed approach and provide an accurate description of commodities prices dynamics.

  19. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    of any exogenous input requirement makes the network attractive. A neural network is an information processing system modeled on the structure of the human brain. Its merit is the ability to deal with fuzzy information whose interrelation is ambiguous...

  20. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1991-01-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and non-contrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include data base manipulation and the solving of routing and classification types of optimization problems. It was their use in optimization that got me involved with Neural Networks. As it turned out, "optimization" used in this context was somewhat misleading, because while some network configurations could indeed solve certain kinds of optimization problems, the configuring or "training" of a Neural Network itself is an optimization problem, and most of the literature which talked about Neural Nets and optimization in the same breath did not speak to my goal of using Neural Nets to help solve lens optimization problems. I did eventually apply Neural Network to lens optimization, and I will touch on those results. The application of Neural Nets to the problem of lens selection was much more successful, and those results will dominate this paper.

  1. Flood estimation: a neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Swain, P.C.; Seshachalam, C.; Umamahesh, N.V. [Regional Engineering Coll., Warangal (India). Water and Environment Div.

    2000-07-01

    The artificial neural network (ANN) approach described in this study aims at predicting the flood flow into a reservoir. This differs from the traditional methods of flow prediction in the sense that it belongs to a class of data driven approaches, whereas the traditional methods are model driven. Physical processes influencing the occurrence of streamflow in a river are highly complex and very difficult to model with available statistical or deterministic models. ANNs provide model free solutions and hence can be expected to be appropriate in these conditions. Non-linearity, adaptivity, evidential response and fault tolerance are additional properties and capabilities of the neural networks. This paper highlights the applicability of neural networks for predicting daily flood flow, taking the Hirakud reservoir on the river Mahanadi in Orissa, India as the case study. The correlation between the observed and predicted flows and the relative error are considered to measure the performance of the model. The correlation between the observed and the modelled flows is computed to be 0.9467 in the testing phase of the model. (orig.)

  2. Artificial neural network applications in ionospheric studies

    Directory of Open Access Journals (Sweden)

    L. R. Cander

    1998-06-01

    Full Text Available The ionosphere of Earth exhibits considerable spatial changes and has large temporal variability on various timescales related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through the entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to the current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show by the most recent examples that modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully by applying artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of the ionospheric critical frequency, f0F2, and Total Electron Content (TEC). Comparisons between results obtained by the proposed approaches and measured f0F2 and TEC data provide prospects for future applications of artificial neural networks in ionospheric studies.

  3. Radiation Behavior of Analog Neural Network Chip

    Science.gov (United States)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  4. Neural network approach to parton distributions fitting

    CERN Document Server

    Piccione, Andrea; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea; Rojo, Joan

    2006-01-01

    We will show an application of neural networks to extract information on the structure of hadrons. A Monte Carlo over experimental data is performed to correctly reproduce data errors and correlations. A neural network is then trained on each Monte Carlo replica via a genetic algorithm. Results on the proton and deuteron structure functions, and on the nonsinglet parton distribution will be shown.

  5. Self-organization of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Clark, J.W.; Winston, J.V.; Rafelski, J.

    1984-05-14

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (brainwashing) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena. 18 references, 2 figures.

  6. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  7. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous spectrophotometric multicomponent analysis are suggested, with a study on the estimation of the components of an antihypertensive combination, namely, atenolol and losartan potassium.

  8. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  9. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility to use Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, Gaussian energy distribution and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  10. Neural Network to Solve Concave Games

    OpenAIRE

    Zixin Liu; Nengfa Wang

    2014-01-01

    This paper is concerned with a neural network method for solving concave games. Combined with variational inequality, Ky Fan inequality, and projection equation, concave games are transformed into a neural network model. On the basis of the Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  11. Recognizing changing seasonal patterns using neural networks

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); G. Draisma (Gerrit)

    1997-01-01

    In this paper we propose a graphical method based on an artificial neural network model to investigate how and when seasonal patterns in macroeconomic time series change over time. Neural networks are useful since the hidden layer units may become activated only in certain seasons or

  12. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of neurons (in addition to the connection strengths, also called synaptic weights) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  13. Initialization of multilayer forecasting artificial neural networks

    OpenAIRE

    Bochkarev, Vladimir V.; Maslennikova, Yulia S.

    2014-01-01

    In this paper, a new method was developed for initialising artificial neural networks predicting dynamics of time series. Initial weighting coefficients were determined for neurons analogously to the case of a linear prediction filter. Moreover, to improve the accuracy of the initialization method for a multilayer neural network, some variants of decomposition of the transformation matrix corresponding to the linear prediction filter were suggested. The efficiency of the proposed neural netwo...

  14. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  15. Analog neural network-based helicopter gearbox health monitoring system.

    Science.gov (United States)

    Monsen, P T; Dzwonczyk, M; Manolakos, E S

    1995-12-01

    The development of a reliable helicopter gearbox health monitoring system (HMS) has been the subject of considerable research over the past 15 years. The deployment of such a system could lead to a significant saving in lives and vehicles as well as dramatically reduce the cost of helicopter maintenance. Recent research results indicate that a neural network-based system could provide a viable solution to the problem. This paper presents two neural network-based realizations of an HMS system. A hybrid (digital/analog) neural system is proposed as an extremely accurate off-line monitoring tool used to reduce helicopter gearbox maintenance costs. In addition, an all analog neural network is proposed as a real-time helicopter gearbox fault monitor that can exploit the ability of an analog neural network to directly compute the discrete Fourier transform (DFT) as a sum of weighted samples. Hardware performance results are obtained using the Integrated Neural Computing Architecture (INCA/1) analog neural network platform that was designed and developed at The Charles Stark Draper Laboratory. The results indicate that it is possible to achieve a 100% fault detection rate with 0% false alarm rate by performing a DFT directly on the first layer of INCA/1 followed by a small-size two-layer feed-forward neural network and a simple post-processing majority voting stage.
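
    The observation that a network layer can compute the DFT as a sum of weighted samples follows from the DFT being a linear map; the sketch below builds that fixed weight matrix explicitly and checks it against an FFT, as an illustration of the idea rather than the INCA/1 hardware.

    ```python
    # The DFT of a length-N frame is a linear layer: X = W @ x, with complex
    # weights W[k, n] = exp(-2j*pi*k*n/N). A fixed first layer can therefore
    # realize it as weighted sums of the input samples.
    import numpy as np

    N = 64
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # fixed "synaptic" weight matrix

    x = np.random.default_rng(10).normal(size=N)   # e.g. one frame of a vibration signal
    X_layer = W @ x                                # DFT via a single matrix multiply
    X_fft = np.fft.fft(x)

    print("max deviation from FFT:", np.max(np.abs(X_layer - X_fft)))
    ```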

  16. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  17. Clustering: a neural network approach.

    Science.gov (United States)

    Du, K-L

    2010-01-01

    Clustering is a fundamental data analysis method. It is widely used for pattern recognition, feature extraction, vector quantization (VQ), image segmentation, function approximation, and data mining. As an unsupervised classification technique, clustering identifies some inherent structures present in a set of objects based on a similarity measure. Clustering methods can be based on statistical model identification (McLachlan & Basford, 1988) or competitive learning. In this paper, we give a comprehensive overview of competitive learning based clustering methods. Importance is attached to a number of competitive learning based clustering neural networks such as the self-organizing map (SOM), the learning vector quantization (LVQ), the neural gas, and the ART model, and clustering algorithms such as the C-means, mountain/subtractive clustering, and fuzzy C-means (FCM) algorithms. Associated topics such as the under-utilization problem, fuzzy clustering, robust clustering, clustering based on non-Euclidean distance measures, supervised clustering, hierarchical clustering as well as cluster validity are also described. Two examples are given to demonstrate the use of the clustering methods.
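
    As a minimal illustration of the competitive-learning family surveyed above (not code from the paper), the sketch below runs a winner-take-all update on toy two-dimensional data; the number of prototypes, learning rate and data are arbitrary.

```python
# Minimal winner-take-all competitive-learning clustering (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
# Toy data: three Gaussian blobs in 2-D.
data = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ([0, 0], [3, 0], [0, 3])])
rng.shuffle(data)

K, eta, epochs = 3, 0.05, 20
prototypes = data[rng.choice(len(data), K, replace=False)].copy()

for _ in range(epochs):
    for x in data:
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # competition
        prototypes[winner] += eta * (x - prototypes[winner])        # move winner only

labels = np.argmin(np.linalg.norm(data[:, None, :] - prototypes[None], axis=2), axis=1)
print("learned prototypes:\n", prototypes)
```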

  18. Complex-valued Neural Networks

    Science.gov (United States)

    Hirose, Akira

    This paper reviews the features and applications of complex-valued neural networks (CVNNs). First we list the present application fields, and describe the advantages of the CVNNs in two application examples, namely, an adaptive plastic-landmine visualization system and an optical frequency-domain-multiplexed learning logic circuit. Then we briefly discuss the features of complex number itself to find that the phase rotation is the most significant concept, which is very useful in processing the information related to wave phenomena such as lightwave and electromagnetic wave. The CVNNs will also be an indispensable framework of the future microelectronic information-processing hardware where the quantum electron wave plays the principal role.

  19. Collision avoidance using neural networks

    Science.gov (United States)

    Sugathan, Shilpa; Sowmya Shree, B. V.; Warrier, Mithila R.; Vidhyapathi, C. M.

    2017-11-01

    Nowadays, accidents on roads are caused by the negligence of drivers and pedestrians or by unexpected obstacles that come into the vehicle's path. In this paper, a model (robot) is developed to assist drivers in travelling smoothly without accidents. It reacts to real-time obstacles on the four critical sides of the vehicle and takes the necessary action. The sensor used for detecting obstacles was an IR proximity sensor. A single-layer perceptron neural network is used to train and test all possible combinations of sensor readings using Matlab (offline). A microcontroller (ARM Cortex-M3 LPC1768) is used to control the vehicle through the output data received from Matlab via serial communication. Hence, the vehicle becomes capable of reacting to any combination of real-time obstacles.
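
    A minimal sketch of the offline training step described above, assuming four binary IR proximity sensors and a hypothetical action encoding (the paper's exact outputs are not given); the trained weights would then be transferred to the microcontroller.

```python
# Sketch: single-layer perceptron trained offline on all 16 combinations of
# four IR proximity sensors (front, rear, left, right). The action encoding
# below is hypothetical; the learned weights would be exported to the ARM board.
import itertools
import numpy as np

X = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)  # all sensor states

def desired_action(front, rear, left, right):
    # 0 = go forward, 1 = turn left, 2 = turn right, 3 = stop (illustrative rule set)
    if not front:
        return 0
    if not left:
        return 1
    if not right:
        return 2
    return 3

y = np.array([desired_action(*row) for row in X], dtype=int)
T = np.eye(4)[y]                      # one-hot targets, one output unit per action

W = np.zeros((4, 4)); b = np.zeros(4)
for epoch in range(50):               # classic perceptron rule, one unit per class
    for x, t in zip(X, T):
        out = ((W @ x + b) > 0).astype(float)
        W += np.outer(t - out, x)
        b += t - out

pred = np.argmax(X @ W.T + b, axis=1)
print("training accuracy:", np.mean(pred == y))
```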

  20. ESTIMATION OF PV MODULE SURFACE TEMPERATURE USING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Can Coskun

    2016-12-01

    Full Text Available This study aimed to use the artificial neural network (ANN) method to estimate the surface temperature of a photovoltaic (PV) panel. Using the experimentally obtained PV data, the accuracy of the ANN model was evaluated. To train the ANN, outdoor temperature, solar radiation and wind speed values were used as inputs and surface temperature as the output. The feed-forward artificial neural network was trained using the Levenberg-Marquardt (LM) algorithm; two thirds of the experimental data were used for training and the remaining third for testing. Additionally, scaled conjugate gradient (SCG) back propagation and resilient back propagation (RB) type ANN algorithms were used for comparison with the LM algorithm. The performances of these three types of artificial neural network were compared, and mean error rates of between 0.005962 and 0.012177% were obtained. The best estimate was produced by the LM algorithm. Estimation of PV surface temperature with artificial neural networks provides better results than conventional correlation methods. This study showed that artificial neural networks may be used effectively to estimate PV surface temperature.
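
    The sketch below reproduces the two-thirds/one-third split described above on synthetic stand-in data. scikit-learn provides no Levenberg-Marquardt solver, so its default optimiser is used instead; the input-output relationship is invented purely for illustration.

```python
# Sketch: feed-forward ANN estimating PV surface temperature from outdoor
# temperature, solar radiation and wind speed. Synthetic data stand in for the
# experiments; scikit-learn has no Levenberg-Marquardt solver, so the default
# 'adam' optimiser is used here instead.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 600
ambient = rng.uniform(5, 35, n)          # deg C
radiation = rng.uniform(0, 1000, n)      # W/m^2
wind = rng.uniform(0, 10, n)             # m/s
# Illustrative "true" relationship plus noise (not a measured correlation).
surface = ambient + 0.03 * radiation - 1.5 * wind + rng.normal(0, 1.0, n)

X = np.column_stack([ambient, radiation, wind])
X_train, X_test, y_train, y_test = train_test_split(
    X, surface, test_size=1 / 3, random_state=0)   # two thirds train, one third test

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```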

  1. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered network and self-organizing analysis maps, which have a strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  2. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  3. Research of The Deeper Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao You Rong

    2016-01-01

    Full Text Available Neural networks (NNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. Many neural models with different network structures have been constructed. In this report, a deeper neural network (DNN) architecture is proposed. The training algorithm of the deeper neural network consists in searching for the global optimum on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed from simple to complicated cases, and the features of the error surface are obtained. Based on these characteristics, the initialization method and training algorithm of the DNN are designed. For the initialization, a block-uniform design method is proposed which separates the error surface into blocks and finds the optimal block using the uniform design method. For the training algorithm, an improved gradient-descent method is proposed which adds a penalty term to the cost function of the plain gradient-descent method. This algorithm gives the network a strong approximation ability and keeps the network state stable. All of these improve the practicality of the neural network.
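
    The following toy sketch (not the paper's code) illustrates the two ingredients on a two-parameter error surface: a block-uniform search for a promising starting region, followed by gradient descent on a cost function with an added penalty term. The surface, block grid and penalty weight are all illustrative.

```python
# Toy illustration of the two ideas on a 2-parameter error surface:
# (1) block-uniform search for a good starting region, (2) gradient descent
# whose cost includes a penalty term. Surface and constants are illustrative.
import numpy as np

def error_surface(w):                       # multimodal toy "error surface"
    x, y = w
    return np.sin(3 * x) * np.cos(3 * y) + 0.1 * (x ** 2 + y ** 2)

def penalised_cost(w, lam=0.01):
    return error_surface(w) + lam * np.sum(w ** 2)   # cost + penalty term

def grad(f, w, eps=1e-5):                   # numerical gradient for the sketch
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

# 1) Block-uniform initialisation: split [-3,3]^2 into 3x3 blocks and probe each
#    block with a small uniform grid; keep the best probed point as the start.
edges = np.linspace(-3, 3, 4)
best_w, best_e = None, np.inf
for i in range(3):
    for j in range(3):
        for x in np.linspace(edges[i], edges[i + 1], 5):
            for y in np.linspace(edges[j], edges[j + 1], 5):
                e = error_surface(np.array([x, y]))
                if e < best_e:
                    best_e, best_w = e, np.array([x, y])

# 2) Gradient descent on the penalised cost from the selected starting point.
w = best_w.copy()
for _ in range(500):
    w -= 0.05 * grad(penalised_cost, w)

print("start:", best_w, "final:", w, "final cost:", penalised_cost(w))
```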

  4. Neural network topology design for nonlinear control

    Science.gov (United States)

    Haecker, Jens; Rudolph, Stephan

    2001-03-01

    Neural networks, especially in nonlinear system identification and control applications, are typically considered to be black boxes which are difficult to analyze and understand mathematically. For this reason, an in-depth mathematical analysis offering insight into the different neural network transformation layers based on a theoretical transformation scheme is desired, but up to now neither available nor known. In previous works it has been shown how proven engineering methods such as dimensional analysis and the Laplace transform may be used to construct a neural controller topology for time-invariant systems. Using the knowledge of the neural correspondences of these two classical methods, the internal nodes of the network could also be successfully interpreted after training. As a further extension of these works, the paper describes the latest developments of a theoretical interpretation framework describing the neural network transformation sequences in nonlinear system identification and control. This is achieved by incorporating the method of exact input-output linearization into the two above-mentioned transform sequences of dimensional analysis and the Laplace transformation. Based on these three theoretical considerations, neural network topologies may be designed in special situations by pure translation, in the sense of a structural compilation, of the known classical solutions into their corresponding neural topology. Based on known exemplary results, the paper synthesizes the proposed approach into the visionary goal of a structural compiler for neural networks. This structural compiler for neural networks is intended to automatically convert classical control formulations into their equivalent neural network structure, based on the principles of equivalence between formula and operator, and between operator and structure, which are discussed in detail in this work.

  5. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.

  6. Neural Networks and Their Application to Air Force Personnel Modeling

    Science.gov (United States)

    1991-11-01

    Fragmentary excerpts from the report: "... breadth of techniques provides fertile ground against which to compare the results obtained with neural networks." "Most of the models in reenlistment or... Specialties (MOSs) receiving SRBs were taken from the 1980 and 1981 Enlisted Master Files (EMFs). These 98 MOSs were then aggregated into 15 Career Management..."

  7. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks are proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices that aid intact but nonfunctional hands. The system is specific to the patient, who activates the grasping motion by voice command, by mechanical switch, or by myoelectric impulse. The patient retains higher-level control, while lower-level control is provided by a neural network analogous to a miniature brain. During training, the patient teaches this miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  8. Synchronization of fractional fuzzy cellular neural networks with interactions

    Science.gov (United States)

    Ma, Weiyuan; Li, Changpin; Wu, Yujiang; Wu, Yongqing

    2017-10-01

    In this paper, we introduce fuzzy theory into the fractional cellular neural networks to dynamically enhance the coupling strength and propose a fractional fuzzy neural network model with interactions. Using the Lyapunov principle of fractional differential equations, we design the adaptive control schemes to realize the synchronization and obtain the synchronization criteria. Finally, we provide some numerical examples to show the effectiveness of our obtained results.

  9. Convolutional neural networks and face recognition task

    Science.gov (United States)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which could be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solving this task, but there is still no universal solution; existing approaches give inadequate results in some cases. The current paper presents the following approach. Firstly, we extract the area containing the face, then we apply the Canny edge detector. At the next stage we use convolutional neural networks (CNN) to finally solve the face recognition and person identification task.

  10. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited to the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB.
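
    A minimal sketch of the genetic-algorithm variant with the topology held fixed (the paper used MATLAB; this stand-in uses NumPy on a synthetic series): each genome encodes the weights of a small 4-4-1 network, and selection, uniform crossover and Gaussian mutation search for weights that minimise the one-step forecasting error.

```python
# Minimal sketch: a genetic algorithm searching the weights of a fixed
# 4-4-1 feed-forward network for one-step exchange-rate forecasting.
# The series is synthetic; GA settings are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
rate = 100 + np.cumsum(rng.normal(0, 0.2, 400))         # toy JPY/USD-like series
change = np.diff(rate)                                  # predict one-step changes
p = 4
X = np.column_stack([change[i:len(change) - p + i] for i in range(p)])
y = change[p:]

def unpack(genome):
    W1 = genome[:16].reshape(4, 4); b1 = genome[16:20]
    W2 = genome[20:24];             b2 = genome[24]
    return W1, b1, W2, b2

def predict(genome, X):
    W1, b1, W2, b2 = unpack(genome)
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(genome):                                    # negative mean squared error
    return -np.mean((predict(genome, X) - y) ** 2)

pop = rng.normal(0, 0.5, size=(60, 25))                 # 60 genomes of 25 weights
for gen in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-20:]]             # keep the fittest 20
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(25) < 0.5                     # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.05, 25))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best MSE:", -fitness(best))
```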

  11. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
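
    In the spirit of the vector-based algorithms described above (this is not Brian's code), the sketch below advances a population of leaky integrate-and-fire neurons with whole-array NumPy operations: one line for the membrane update, one boolean vector for the spikes, and one matrix product to propagate them.

```python
# Minimal vectorised leaky integrate-and-fire network step in NumPy,
# in the spirit of the vector-based algorithms described above (not Brian's code).
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 400, 0.1e-3, 0.5                  # neurons, time step (s), duration (s)
tau, v_rest, v_thresh, v_reset = 20e-3, -70e-3, -50e-3, -65e-3

W = (rng.random((N, N)) < 0.02) * 0.5e-3     # sparse random synaptic weights (volts)
v = np.full(N, v_rest)
spike_counts = np.zeros(N, dtype=int)

for _ in range(int(T / dt)):
    I_ext = 21e-3 + 2e-3 * rng.standard_normal(N)        # noisy external drive (volts)
    v += dt / tau * (v_rest - v + I_ext)                  # vectorised membrane update
    spiked = v >= v_thresh                                # boolean spike vector
    v[spiked] = v_reset                                   # vectorised reset
    v += W @ spiked                                       # propagate spikes in one product
    spike_counts += spiked

print("mean firing rate (Hz):", spike_counts.mean() / T)
```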

  12. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data were not well suited to a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as the MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  13. Neural Network and Letter Recognition.

    Science.gov (United States)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system is comprised of two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlations between the detected local features are considered. Through correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses data through the 'Gabor' transform. Pattern-dependent choice of the centers and wavelengths of the 'Gabor' filters is the source of the shift and scale tolerance of the system. Three different learning schemes have been investigated in the recognition unit, namely error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net above. The two-dimensional Fourier transform is used as the preprocessing and the perceptron is used as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected. Some of these are used as the training sets and the remainder are used as the test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the above three different learning algorithms. The minimum error

  14. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within...... statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given...

  15. Inverse kinematics problem in robotics using neural networks

    Science.gov (United States)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematic problem. The networks are trained with endeffector position and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary endeffector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
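
    A small stand-in for the experiment described above (the paper uses a 3-DOF spatial manipulator; here a 3-DOF planar arm with illustrative link lengths is used): forward-kinematics pairs are generated, and an MLP learns the map from end-effector pose back to joint angles on a single solution branch.

```python
# Sketch: learn the inverse kinematics of a 3-DOF planar arm with an MLP
# (the paper uses a 3-DOF spatial manipulator; link lengths here are illustrative).
# Giving the network the end-effector orientation as well as position keeps the
# inverse map single-valued on the chosen joint-range branch.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
L1, L2, L3 = 1.0, 0.8, 0.5

def forward_kinematics(q):
    q1 = q[:, 0]; q12 = q1 + q[:, 1]; q123 = q12 + q[:, 2]
    x = L1 * np.cos(q1) + L2 * np.cos(q12) + L3 * np.cos(q123)
    y = L1 * np.sin(q1) + L2 * np.sin(q12) + L3 * np.sin(q123)
    return np.column_stack([x, y, q123])      # end-effector pose (x, y, orientation)

q_train = rng.uniform(0.1, np.pi / 2 - 0.1, size=(4000, 3))   # one solution branch
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
net.fit(forward_kinematics(q_train), q_train)  # pose in, joint angles out

q_test = rng.uniform(0.1, np.pi / 2 - 0.1, size=(500, 3))
q_pred = net.predict(forward_kinematics(q_test))
print("mean joint-angle error (rad):", np.mean(np.abs(q_pred - q_test)))
```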

  16. Person Movement Prediction Using Neural Networks

    OpenAIRE

    Vintan, Lucian; Gellert, Arpad; Petzold, Jan; Ungerer, Theo

    2006-01-01

    Ubiquitous systems use context information to adapt appliance behavior to human needs. Even more convenience is reached if the appliance foresees the user's desires and acts proactively. This paper proposes neural prediction techniques to anticipate a person's next movement. We focus on neural predictors (multi-layer perceptron with back-propagation learning) with and without pre-training. The optimal configuration of the neural network is determined by evaluating movement sequences of real p...

  17. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Science.gov (United States)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most of the modern analyses in high energy physics use signal-versus-background classification techniques from machine learning methods, and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  18. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques from machine learning methods, and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  19. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
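
    As one concrete way to do what is described above (a sketch, not the project's code), the fragment below projects a 64-dimensional dataset to two dimensions with PCA and plots it coloured by the outcome variable; scikit-learn's built-in digits data stand in for the real dataset.

```python
# Sketch: reduce a many-dimensional dataset to two dimensions and plot it,
# colouring points by the outcome variable, as one way to "see" the structure
# the inputs carry. Uses scikit-learn's built-in digits data as a stand-in.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)          # 64 input dimensions per sample
X2 = PCA(n_components=2).fit_transform(X)    # map to two dimensions

plt.scatter(X2[:, 0], X2[:, 1], c=y, s=8, cmap="tab10")
plt.colorbar(label="class")
plt.xlabel("principal component 1")
plt.ylabel("principal component 2")
plt.title("64-dimensional digits projected to 2-D")
plt.show()
```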

  20. [Medical use of artificial neural networks].

    Science.gov (United States)

    Molnár, B; Papik, K; Schaefer, R; Dombóvári, Z; Fehér, J; Tulassay, Z

    1998-01-04

    The main aim of research in medical diagnostics is to develop more exact, cost-effective and convenient systems, procedures and methods for supporting clinicians. In their paper the authors introduce a new method that has recently come into focus, referred to as artificial neural networks. Based on the literature of the past 5-6 years they give a brief review, highlighting the most important works, of the idea behind neural networks and what they are used for in the medical field. The definition, structure and operation of neural networks are discussed. In the application part they collect examples in order to give an insight into neural network application research. It is emphasised that in the near future fundamentally new diagnostic equipment can be developed based on this new technology in the fields of ECG, EEG and macroscopic and microscopic image analysis systems.

  1. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    methods. That is why it is becoming popular in various fields including coastal engineering. Waves and tides will play important roles in coastal erosion or accretion. This paper briefly describes the back-propagation neural networks and its application...

  2. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact, that an existing, traditional designed, feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  3. Blood glucose prediction using neural network

    Science.gov (United States)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each laser diode operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is specifically used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level as compared with the partial least squares model.
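
    The OGTT spectra are not available here, so the following sketch compares an MLP with partial least squares on synthetic six-channel "absorbance" data that merely stand in for the six laser-diode measurements.

```python
# Sketch comparing an MLP with partial least squares for glucose estimation.
# The six "absorbance" inputs are synthetic stand-ins for the six laser-diode
# channels; the real OGTT data are not available here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 400
glucose = rng.uniform(4, 12, n)                       # mmol/L
A = np.outer(glucose, rng.uniform(0.5, 1.5, 6))       # six-channel "absorbances"
A += rng.normal(0, 0.3, A.shape)                      # measurement noise

A_tr, A_te, g_tr, g_te = train_test_split(A, glucose, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3).fit(A_tr, g_tr)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(A_tr, g_tr)

print("PLS R^2:", r2_score(g_te, pls.predict(A_te).ravel()))
print("MLP R^2:", r2_score(g_te, mlp.predict(A_te)))
```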

  4. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision made in predicting is very important, because prediction helps to determine the forex value at a certain time in the future and thus to reduce the risk of loss. The purpose of this research is to predict the forex market using a neural network model with per-minute time series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method includes data collection followed by training, learning and testing using a neural network. After evaluation, the results show that the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help to reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  5. Using Neural Networks in Diagnosing Breast Cancer

    National Research Council Canada - National Science Library

    Fogel, David

    1997-01-01

    .... In the current study, evolutionary programming is used to train neural networks and linear discriminant models to detect breast cancer in suspicious and microcalcifications using radiographic features and patient age...

  6. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  7. Isolated Speech Recognition Using Artificial Neural Networks

    National Research Council Canada - National Science Library

    Polur, Prasad

    2001-01-01

    .... A small size vocabulary containing the words YES and NO is chosen. Spectral features using cepstral analysis are extracted per frame and imported to a feedforward neural network which uses a backpropagation with momentum training algorithm...

  8. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  9. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglova

    2008-11-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the "free" space using ultrasound range finder data. The second neural network "finds" a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  10. Constructive autoassociative neural network for facial recognition.

    Directory of Open Access Journals (Sweden)

    Bruno J T Fernandes

    Full Text Available Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network. CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.

  11. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Genetic Algorithm Optimized Neural Networks Ensemble as Calibration Model for Simultaneous Spectrophotometric Estimation of Atenolol and Losartan Potassium in Tablets. Dondeti Satyanarayana*, Kamarajan Kannan and Rajappan Manavalan. Department of Pharmacy, Annamalai University, Annamalainagar, Tamil ...

  12. Applications of Pulse-Coupled Neural Networks

    CERN Document Server

    Ma, Yide; Wang, Zhaobin

    2011-01-01

    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  13. Advanced models of neural networks nonlinear dynamics and stochasticity in biological neurons

    CERN Document Server

    Rigatos, Gerasimos G

    2015-01-01

    This book provides a complete study on neural structures exhibiting nonlinear and stochastic dynamics, elaborating on neural dynamics by introducing advanced models of neural networks. It overviews the main findings in the modelling of neural dynamics in terms of electrical circuits and examines their stability properties with the use of dynamical systems theory. It is suitable for researchers and postgraduate students engaged with neural networks and dynamical systems theory.

  14. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
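
    The sketch below shows how the reported quantities (area under the ROC curve, sensitivity and specificity at a chosen operating point) are computed from network scores and grader labels; the scores used here are random placeholders, not outputs of the screening network.

```python
# Sketch: computing the reported metrics (area under the ROC curve, sensitivity,
# specificity) from network scores and grader labels. Scores below are random
# placeholders, not outputs of the screening network.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                       # 1 = referable retinopathy
scores = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, 500), 0, 1)  # fake net outputs

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
operating = thresholds[np.argmax(tpr - fpr)]           # e.g. Youden-optimal threshold

y_pred = (scores >= operating).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUC={auc:.3f}  sensitivity={tp/(tp+fn):.3f}  specificity={tn/(tn+fp):.3f}")
```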

  15. Symbolic processing in neural networks

    OpenAIRE

    Neto, João Pedro; Hava T Siegelmann; Costa,J.Félix

    2003-01-01

    In this paper we show that programming languages can be translated into recurrent (analog, rational-weighted) neural nets. Implementation of programming languages in neural nets turns out to be not only theoretically exciting, but also to have some practical implications for the recent efforts to merge symbolic and sub-symbolic computation. To be of some use, it should be carried out in a context of bounded resources. Herein, we show how to use resource bounds to speed up computations over neural nets, thro...

  16. Hindcasting cyclonic waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Chakravarty, N.V.

    the backpropagation networks with updated algorithms are used in this paper. A brief description about the working of a back propagation neural network and three updated algorithms is given below. Backpropagation learning: Backpropagation is the most widely used... algorithm for supervised learning with multi layer feed forward networks. The idea of the backpropagation learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary...
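
    Because the record describes back-propagation as the repeated application of the chain rule, a minimal one-hidden-layer implementation is sketched below with the chain rule written out step by step; the cyclone wave data are replaced by a toy one-dimensional regression target.

```python
# Minimal backpropagation for a one-hidden-layer network, written out so the
# repeated application of the chain rule is explicit. The cyclone/wave data are
# not available, so a toy 1-D regression target stands in.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X)                                   # toy target

W1, b1 = rng.normal(0, 0.5, (1, 10)), np.zeros(10)
W2, b2 = rng.normal(0, 0.5, (10, 1)), np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # forward pass
    z1 = X @ W1 + b1
    h = np.tanh(z1)
    y_hat = h @ W2 + b2
    err = y_hat - y                                  # dE/dy_hat for E = 0.5*mean(err^2)

    # backward pass: chain rule applied layer by layer
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T                                  # propagate error to hidden layer
    dz1 = dh * (1 - h ** 2)                          # through the tanh derivative
    dW1 = X.T @ dz1 / len(X)
    db1 = dz1.mean(axis=0)

    for param, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * g                              # gradient-descent weight update

print("final MSE:", np.mean(err ** 2))
```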

  17. Image Restoration Technology Based on Discrete Neural network

    Directory of Open Access Journals (Sweden)

    Zhou Duoying

    2015-01-01

    Full Text Available With the development of computer science and technology, artificial intelligence is advancing rapidly in the field of image restoration. Based on the MATLAB platform, this paper constructs an artificial-intelligence image restoration technique based on a discrete neural network and a feedforward network, and carries out simulation and comparison of the restoration process using a bionic algorithm. Through the application of the simulated restoration technique, this paper verifies that the discrete neural network has good convergence and identification capability in image restoration, with a better effect than that of the feedforward network. Restoration technology based on the discrete neural network can provide a reliable mathematical model for this field.

  18. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    Science.gov (United States)

    Meade, Andrew J., Jr.

    1991-01-01

    It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural networks (ANN) model has a potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  19. Using fuzzy logic to integrate neural networks and knowledge-based systems

    Science.gov (United States)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.

  20. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  1. Maximum Entropy Approaches to Living Neural Networks

    Directory of Open Access Journals (Sweden)

    John M. Beggs

    2010-01-01

    Full Text Available Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.

  2. Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach

    Science.gov (United States)

    2016-03-30

    Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach. Keywords: monitoring, flight parameter, nonlinear modeling, Artificial Neural Network, typical loadcase. Fragmentary excerpts: "Aircraft load monitoring is an..." "... Neural Networks (ANN), i.e. the BP network and Kohonen Clustering Network, are applied and revised by Kalman Filter and Genetic Algorithm to build ..."

  3. Fin-and-tube condenser performance evaluation using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ling-Xiao [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); Zhang, Chun-Lu [China R and D Center, Carrier Corporation, No. 3239 Shen Jiang Road, Shanghai 201206 (China)

    2010-05-15

    The paper presents a neural network approach to performance evaluation of the fin-and-tube air-cooled condensers which are widely used in air-conditioning and refrigeration systems. Inputs of the neural network include refrigerant and air-flow rates, refrigerant inlet temperature and saturated temperature, and entering air dry-bulb temperature. Outputs of the neural network consist of the heating capacity and the pressure drops on both the refrigerant and air sides. The multi-input multi-output (MIMO) neural network is separated into multi-input single-output (MISO) neural networks for training. Afterwards, the trained MISO neural networks are combined into a MIMO neural network, which indicates that the number of training data sets is determined by the biggest MISO neural network, not the whole MIMO network. Compared with a validated first-principles model, the standard deviations of the neural network models are less than 1.9%, and all errors fall within ±5%. (author)
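
    A small sketch of the MISO-to-MIMO idea (synthetic placeholder data, not the condenser measurements): one network is trained per output, and the trained networks are then queried together as a single multi-output model.

```python
# Sketch of the MISO-to-MIMO idea: one small network is trained per output
# (capacity, refrigerant-side pressure drop, air-side pressure drop) and the
# trained networks are then queried together as a single MIMO model.
# The condenser data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.02, 0.10, n),   # refrigerant flow rate
    rng.uniform(0.5, 3.0, n),     # air flow rate
    rng.uniform(60, 90, n),       # refrigerant inlet temperature
    rng.uniform(35, 55, n),       # saturated temperature
    rng.uniform(20, 40, n),       # entering air dry-bulb temperature
])
Y = np.column_stack([             # illustrative output relationships plus noise
    5 * X[:, 0] + 2 * X[:, 1] + 0.1 * (X[:, 3] - X[:, 4]),
    80 * X[:, 0] ** 2,
    0.4 * X[:, 1] ** 2,
]) + rng.normal(0, 0.05, (n, 3))

miso_nets = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                          random_state=k).fit(X, Y[:, k]) for k in range(3)]

def mimo_predict(x):
    """Combined MIMO model: stack the three trained MISO networks."""
    return np.column_stack([net.predict(x) for net in miso_nets])

print(mimo_predict(X[:3]))
```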

  4. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  5. On sparsely connected optimal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V. [Los Alamos National Lab., NM (United States); Draghici, S. [Wayne State Univ., Detroit, MI (United States)

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-in. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the length of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-in will be suggested for F_{n,m} functions.

  6. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using an artificial neural network are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  7. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural Networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications...

  8. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  9. Visual Servoing from Deep Neural Networks

    OpenAIRE

    Bateux, Quentin; Marchand, Eric; Leitner, Jürgen; Chaumette, Francois; Corke, Peter

    2017-01-01

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing c...

  10. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present...... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  11. Electronic device aspects of neural network memories

    Science.gov (United States)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  12. An overview on development of neural network technology

    Science.gov (United States)

    Lin, Chun-Shin

    1993-01-01

    The study has been to obtain a bird's-eye view of the current neural network technology and the neural network research activities in NASA. The purpose was two fold. One was to provide a reference document for NASA researchers who want to apply neural network techniques to solve their problems. Another one was to report out survey results regarding NASA research activities and provide a view on what NASA is doing, what potential difficulty exists and what NASA can/should do. In a ten week study period, we interviewed ten neural network researchers in the Langley Research Center and sent out 36 survey forms to researchers at the Johnson Space Center, Lewis Research Center, Ames Research Center and Jet Propulsion Laboratory. We also sent out 60 similar forms to educators and corporation researchers to collect general opinions regarding this field. Twenty-eight survey forms, 11 from NASA researchers and 17 from outside, were returned. Survey results were reported in our final report. In the final report, we first provided an overview on the neural network technology. We reviewed ten neural network structures, discussed the applications in five major areas, and compared the analog, digital and hybrid electronic implementation of neural networks. In the second part, we summarized known NASA neural network research studies and reported the results of the questionnaire survey. Survey results show that most studies are still in the development and feasibility study stage. We compared the techniques, application areas, researchers' opinions on this technology, and many aspects between NASA and non-NASA groups. We also summarized their opinions on difficulties encountered. Applications are considered the top research priority by most researchers. Hardware development and learning algorithm improvement are the next. The lack of financial and management support is among the difficulties in research study. All researchers agree that the use of neural networks could result in

  13. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several kinds of quantum noise under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that QPNN requires much fewer neuron resources than the classical feedforward neural network to obtain a good result. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  14. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1990-07-01

    Neural networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, neural networks have found use in a number of areas, working on non-trivial and non-contrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of neural networks include database manipulation and the solving of routing and classification types of optimization problems. Neural networks are constructed from neurons, which in electronics or software attempt to model, but are not constrained by, the real thing, i.e., neurons in our gray matter. Neurons are simple processing units connected to many other neurons over pathways which modify the incoming signals. A single synthetic neuron typically sums its weighted inputs, runs this sum through a non-linear function, and produces an output. In the brain, neurons are connected in a complex topology; in hardware and software the topology is typically much simpler, with neurons lying side by side, forming layers which connect to the layer of neurons that receive their outputs. This simplistic model is much easier to construct than the real thing, and yet can solve real problems. The information in a network, or its "memory", is completely contained in the weights on the connections from one neuron to another. Establishing these weights is called "training" the network. Some networks are trained by design -- once constructed, no further learning takes place. Other types of networks require iterative training once wired up, but are not trainable once taught. Still other types of networks can continue to learn after initial construction. The main benefit of using neural networks is their ability to work with conflicting or incomplete ("fuzzy") data sets. This ability and its usefulness will become evident in the following ...
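
    A minimal sketch of the synthetic neuron just described (a weighted sum of inputs passed through a nonlinearity, with neurons lying side by side in layers). The tanh nonlinearity, layer sizes, and random weights are illustrative assumptions.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single synthetic neuron: weighted sum of inputs passed through a nonlinearity."""
    return np.tanh(np.dot(weights, inputs) + bias)

def layer(inputs, W, b):
    """A layer of neurons lying 'side by side': one weighted sum per row of W."""
    return np.tanh(W @ inputs + b)

rng = np.random.default_rng(1)
x = rng.normal(size=3)                           # three incoming signals
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # hidden layer of 4 neurons
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)    # output layer of 2 neurons
print(layer(layer(x, W1, b1), W2, b2))
```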

  15. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    Science.gov (United States)

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction are also developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  16. Neutron spectrometry with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail: rvega@cantera.reduaz.mx

    2005-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
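
    The folding and unfolding steps described above can be sketched as follows. The response matrix, spectra, network size, and training schedule here are synthetic stand-ins (not the UTA4 matrix or the 129-spectrum training set); the sketch only shows count rates computed by folding and a small network trained to invert that mapping.

```python
import numpy as np

rng = np.random.default_rng(2)

n_spheres, n_groups, n_train = 7, 31, 500
R = rng.uniform(0.0, 1.0, size=(n_spheres, n_groups))     # stand-in response matrix (not UTA4)
spectra = rng.dirichlet(np.ones(n_groups), size=n_train)   # synthetic unit-fluence spectra
rates = spectra @ R.T                                       # folding: expected count rates

# One-hidden-layer network: count rates in, 31-group spectrum out.
h = 32
W1, b1 = rng.normal(0, 0.3, (h, n_spheres)), np.zeros(h)
W2, b2 = rng.normal(0, 0.3, (n_groups, h)), np.zeros(n_groups)
lr = 0.05

for epoch in range(2000):
    a1 = np.tanh(rates @ W1.T + b1)            # hidden activations
    pred = a1 @ W2.T + b2                      # predicted spectra
    err = pred - spectra                       # gradient of the mean-squared error
    gW2, gb2 = err.T @ a1 / n_train, err.mean(axis=0)
    da1 = (err @ W2) * (1 - a1 ** 2)
    gW1, gb1 = da1.T @ rates / n_train, da1.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

test = rng.dirichlet(np.ones(n_groups))                    # an unseen synthetic spectrum
unfolded = np.tanh(test @ R.T @ W1.T + b1) @ W2.T + b2
print("RMSE on an unseen spectrum:", np.sqrt(np.mean((unfolded - test) ** 2)))
```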

  17. Neutron spectrometry using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Carrillo, Hector Rene [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]. E-mail: fermineutron@yahoo.com; Martin Hernandez-Davila, Victor [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Manzanares-Acuna, Eduardo [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Mercado Sanchez, Gema A. [Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Pilar Iniguez de la Torre, Maria [Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid, Valladolid (Spain); Barquero, Raquel [Hospital Universitario Rio Hortega, Valladolid (Spain); Palacios, Francisco; Mendez Villafane, Roberto [Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid, Valladolid (Spain)]|[Universidad Europea Miguel de Cervantes, C. Padre Julio Chevalier No. 2, 47012 Valladolid (Spain); Arteaga Arteaga, Tarcicio [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Envases de Zacatecas, SA de CV, Parque Industrial de Calera de Victor Rosales, Zac. (Mexico); Manuel Ortiz Rodriguez, Jose [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)

    2006-04-15

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources, reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions, as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab® program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem.

  18. Neural networks to formulate special fats

    Directory of Open Access Journals (Sweden)

    Garcia, R. K.

    2012-09-01

    Full Text Available Neural networks are a branch of artificial intelligence based on the structure and development of biological systems, having as their main characteristic the ability to learn and generalize knowledge. They are used for solving complex problems for which traditional computing systems are inefficient. To date, applications have been proposed for many different sectors and activities. In the area of fats and oils, the use of neural networks has focused mainly on two issues: the detection of adulteration and the development of fatty products. The formulation of fats for specific uses is the classic case of a complex problem where an expert, or group of experts, defines the proportions of each base which, when mixed, meet the specifications of the desired product. Some conventional computer systems are currently available to assist the experts; however, these systems have some shortcomings. This article describes in detail a system (MIX) for formulating fatty products, shortenings or special fats from three or more components by using neural networks. All stages of development, including design, construction, training, evaluation, and operation of the network, are outlined.


  19. Antagonistic neural networks underlying differentiated leadership roles

    OpenAIRE

    Richard Eleftherios Boyatzis; Kylie eRochford; Anthony Ian Jack

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotionally oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN) and the Default Mode Network (DMN). Neural activity in ...

  20. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics, grouped into chapters devoted to novelties and innovations in the field of Artificial Neural Networks, as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and bio-inspired memristor-based networks. Providing insights into the latest research interests from a pool of international experts coming from different research fields, the volume is valuable to all those with an interest in a holistic approach to implementing believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  1. Community structure of complex networks based on continuous neural network

    Science.gov (United States)

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou

    2017-09-01

    As a new subject, the study of complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so analyzing the community structure of complex networks accurately is a very important task. In this paper, we study the problem of extracting the community structure of complex networks and propose a continuous neural network (CNN) algorithm. It is proved that, for any given initial value, the continuous neural network algorithm converges to the eigenvector of the maximum eigenvalue of the network modularity matrix. Therefore, a two-community structure can be obtained from the signs of the components of the stable state to which the network evolves.
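
    A hedged sketch of the split that the algorithm converges to: compute the modularity matrix of a toy graph, take the eigenvector of its largest eigenvalue, and assign the two communities by sign. This follows the standard spectral formulation rather than the paper's continuous neural network dynamics, and the graph is illustrative.

```python
import numpy as np

# Toy undirected graph: two loosely connected cliques.
A = np.zeros((8, 8))
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1                      # single bridge edge

k = A.sum(axis=1)                          # degrees
m = A.sum() / 2                            # number of edges
B = A - np.outer(k, k) / (2 * m)           # modularity matrix

eigvals, eigvecs = np.linalg.eigh(B)
leading = eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
labels = np.where(leading >= 0, 0, 1)      # two-community split by sign
print(labels)
```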

  2. Flexible body control using neural networks

    Science.gov (United States)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy-neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems and should be further evaluated.

  3. Identification and Position Control of Marine Helm using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hui ZHU

    2008-02-01

    Full Text Available If nonlinearities such as saturation of the amplifier gain and motor torque, gear backlash, and shaft compliances - just to name a few - are considered in the position control system of a marine helm, traditional control methods are no longer sufficient to improve the performance of the system. In this paper an alternative approach to traditional control methods - a neural network reference controller - is proposed to establish adaptive control of the position of the marine helm so that the controlled variable reaches the commanded position. This neural network controller comprises two neural networks. One is the plant model network, used to identify the nonlinear system, and the other is the controller network, used to control the output to follow the reference model. The experimental results demonstrate that this adaptive neural network reference controller has much better control performance than is obtained with traditional controllers.

  4. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that, thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
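
    A minimal sketch of the core idea, under stated assumptions rather than the authors' exact formulation: a leaky integrate-and-fire layer is run forward on spike trains, and a smooth surrogate function stands in for the derivative of the non-differentiable spike step that backpropagation would need. The constants, input statistics, and surrogate shape are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_in, n_out = 100, 20, 5
threshold, tau = 1.0, 20.0

W = rng.normal(0, 0.5, (n_out, n_in))
in_spikes = (rng.random((T, n_in)) < 0.1).astype(float)   # Poisson-like input spike trains

def surrogate_grad(v):
    """Smooth stand-in for the derivative of the spike step at membrane potential v."""
    return 1.0 / (1.0 + np.abs(v - threshold)) ** 2

v = np.zeros(n_out)
out_spikes, grads = [], []
for t in range(T):
    v = v * (1 - 1 / tau) + W @ in_spikes[t]    # leaky integration of weighted input spikes
    s = (v >= threshold).astype(float)          # non-differentiable spike event
    grads.append(surrogate_grad(v))             # what backprop would use in place of s'(v)
    v = v * (1 - s)                             # reset neurons that fired
    out_spikes.append(s)

print("output spike counts:", np.sum(out_spikes, axis=0))
print("mean surrogate gradient:", np.mean(grads))
```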

  5. Neural networks for sign language translation

    Science.gov (United States)

    Wilson, Beth J.; Anspach, Gretel

    1993-09-01

    A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here is used to classify video images of handshapes into their linguistic counterpart in American Sign Language. The video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors are then used as inputs to a neural network that classifies their shapes. The network is trained with various examples from different signers and is tested with new images from new signers. The results have shown that for coarse handshape classes, the network is invariant to the type of camera used to film the various signers and to the segmentation technique.
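
    A rough sketch of Fourier-descriptor shape coding of a closed silhouette boundary, the kind of preprocessing described above. The synthetic contour and the normalization choices are illustrative assumptions, not the paper's exact pipeline or data; dropping the DC term and dividing by the first harmonic give translation and scale insensitivity, and taking magnitudes discards orientation.

```python
import numpy as np

# Synthetic closed contour standing in for a hand-silhouette boundary.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x = np.cos(t) * (1 + 0.3 * np.cos(5 * t))
y = np.sin(t) * (1 + 0.3 * np.cos(5 * t))
contour = x + 1j * y                               # boundary points as complex numbers

coeffs = np.fft.fft(contour)
coeffs[0] = 0                                      # drop the DC term -> translation invariance
descriptors = np.abs(coeffs) / np.abs(coeffs[1])   # scale-normalize by the first harmonic

# Keep a handful of low-order descriptors as the classifier's input vector.
print(descriptors[1:11])
```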

  6. Neural networks for data compression and invariant image recognition

    Science.gov (United States)

    Gardner, Sheldon

    1989-01-01

    An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.

  7. Sign Language Recognition using Neural Networks

    Directory of Open Access Journals (Sweden)

    Sabaheta Djogic

    2014-11-01

    Full Text Available Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been made for overcoming the problem of communication with deaf people. This encouraged us to develop a system for the Bosnian sign language, since there is a need for such a system. The work is done with the use of digital image processing methods, providing a system that trains a multilayer neural network using a backpropagation algorithm. Images are processed by feature extraction methods, and the data set has been created using a masking method. Training is done using the cross-validation method for better performance; thus, an accuracy of 84% is achieved.

  8. 3-D components of a biological neural network visualized in computer generated imagery. II - Macular neural network organization

    Science.gov (United States)

    Ross, Muriel D.; Meyer, Glenn; Lam, Tony; Cutler, Lynn; Vaziri, Parshaw

    1990-01-01

    Computer-assisted reconstructions of small parts of the macular neural network show how the nerve terminals and receptive fields are organized in 3-dimensional space. This biological neural network is anatomically organized for parallel distributed processing of information. Processing appears to be more complex than in computer-based neural networks, because spatiotemporal factors figure into synaptic weighting. Serial reconstruction data show anatomical arrangements which suggest that (1) assemblies of cells analyze and distribute information with inbuilt redundancy, to improve reliability; (2) feedforward/feedback loops provide the capacity for presynaptic modulation of output during processing; (3) constrained randomness in connectivities contributes to adaptability; and (4) local variations in network complexity permit differing analyses of incoming signals to take place simultaneously. The last inference suggests that there may be segregation of information flow to central stations subserving particular functions.

  9. Equivalence of Conventional and Modified Network of Generalized Neural Elements

    Directory of Open Access Journals (Sweden)

    E. V. Konovalov

    2016-01-01

    Full Text Available The article is devoted to the analysis of neural networks consisting of generalized neural elements. The first part of the article proposes a new neural network model - a modified network of generalized neural elements (MGNE-network). This network develops the model of the generalized neural element, whose formal description contains some flaws; in the model of the MGNE-network these drawbacks are overcome. The neural network is introduced all at once, without a preliminary description of the model of a single neural element and of the way such elements interact. The description of the neural network mathematical model is simplified, which makes it relatively easy to construct a simulation model on its basis and to conduct numerical experiments. The model of the MGNE-network is universal, uniting properties of networks consisting of neuron-oscillators and neuron-detectors. In the second part of the article we prove the equivalence of the dynamics of the two considered neural networks: the network consisting of classical generalized neural elements, and the MGNE-network. We introduce the definition of equivalence in the functioning of the generalized neural element and the MGNE-network consisting of a single element. Then we introduce the definition of the equivalence of the dynamics of the two neural networks in general. The correspondence between the parameters of the two considered neural network models is determined, and we discuss the issue of matching their initial conditions. We prove a theorem on the equivalence of the dynamics of the two considered neural networks. This theorem allows us to apply all previously obtained results for networks consisting of classical generalized neural elements to the MGNE-network.

  10. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics; 2. Feed-forward networks, Perceptrons, Function approximators; 3. Self-organisation, Feature Maps; 4. Feed-back Networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection.

  11. Qualitative analysis and control of complex neural networks with delays

    CERN Document Server

    Wang, Zhanshan; Zheng, Chengde

    2016-01-01

    This book focuses on the stability of dynamical neural systems, the synchronization of coupled neural systems and their applications in automation control and electrical engineering. The redefined concepts of stability, synchronization and consensus are adopted to provide a better explanation of complex neural networks. Researchers in the fields of dynamical systems, computer science, electrical engineering and mathematics will benefit from the discussions on complex systems. The book will also help readers to better understand the theory behind the control technique and its design.

  12. Neural Networks in Antennas and Microwaves: A Practical Approach

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2001-12-01

    Full Text Available Neural networks are electronic systems which can be trained to remember behavior of a modeled structure in given operational points, and which can be used to approximate behavior of the structure out of the training points. These approximation abilities of neural nets are demonstrated on modeling a frequency-selective surface, a microstrip transmission line and a microstrip dipole. Attention is turned to the accuracy and to the efficiency of neural models. The association of neural models and genetic algorithms, which can provide a global design tool, is discussed.

  13. Artificial neural network in cosmic landscape

    Science.gov (United States)

    Liu, Junyu

    2017-12-01

    In this paper we propose that the artificial neural network, the basis of machine learning, is useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically require exponential complexity when the number of fields is large. However, a basic application of an artificial neural network could solve the problem, based on the universal approximation theorem for the multilayer perceptron. A toy model of inflation with multiple light fields is investigated numerically as an example of such an application.

  14. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  15. Automatic identification of species with neural networks.

    Science.gov (United States)

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  16. Automatic identification of species with neural networks

    Directory of Open Access Journals (Sweden)

    Andrés Hernández-Serna

    2014-11-01

    Full Text Available A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  17. Pulse image recognition using fuzzy neural network.

    Science.gov (United States)

    Xu, L S; Meng, Max Q -H; Wang, K Q

    2007-01-01

    The automatic recognition of pulse images is key in the research on computerized pulse diagnosis. In order to automatically differentiate pulse patterns using small samples, a fuzzy neural network to classify pulse images, based on the knowledge of experts in traditional Chinese pulse diagnosis, was designed. The designed classifier can make both hard and soft decisions for identifying 18 patterns of pulse images with an accuracy of 91%, which is better than the results achieved by a back-propagation neural network.

  18. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    failure" which is main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceed the effective shear strength. These slides have been referred to as landslide. An expert system based on artificial...... neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The Artificial neural network model of this research uses slope characteristics as input and leads to the output in form of the probability of failure...

  19. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  20. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  1. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.

  2. Artificial neural networks modeling gene-environment interaction

    Directory of Open Access Journals (Sweden)

    Günther Frauke

    2012-05-01

    Full Text Available Abstract Background: Gene-environment interactions play an important role in the etiological pathway of complex diseases. An appropriate statistical method for handling a wide variety of complex situations involving interactions between variables is still lacking, especially when continuous variables are involved. The aim of this paper is to explore the ability of neural networks to model different structures of gene-environment interactions. A simulation study is set up to compare neural networks with standard logistic regression models. Eight different structures of gene-environment interactions are investigated. These structures are characterized by penetrance functions that are based on sigmoid functions or on combinations of linear and non-linear effects of a continuous environmental factor and a genetic factor with main effect or with a masking effect only. Results: In our simulation study, neural networks are more successful in modeling gene-environment interactions than logistic regression models. This outperformance is especially pronounced when modeling sigmoid penetrance functions, when distinguishing between linear and nonlinear components, and when modeling masking effects of the genetic factor. Conclusion: Our study shows that neural networks are a promising approach for analyzing gene-environment interactions. Especially, if no prior knowledge of the correct nature of the relationship between co-variables and response variable is present, neural networks provide a valuable alternative to regression methods that are limited to the analysis of linearly separable data.

  3. Neural-network-based fuzzy logic decision systems

    Science.gov (United States)

    Kulkarni, Arun D.; Giridhar, G. B.; Coca, Praveen

    1994-10-01

    During the last few years there has been a large and energetic upswing in research efforts aimed at synthesizing fuzzy logic with neural networks. This combination of neural networks and fuzzy logic seems natural because the two approaches generally attack the design of 'intelligent' systems from quite different angles. Neural networks provide algorithms for learning, classification, and optimization, whereas fuzzy logic often deals with issues such as reasoning at a high (semantic or linguistic) level. Consequently the two technologies complement each other. In this paper, we combine neural networks with fuzzy logic techniques. We propose an artificial neural network (ANN) model for a fuzzy logic decision system. The model consists of six layers. The first three layers map the input variables to fuzzy set membership functions. The last three layers implement the decision rules. The model learns the decision rules using a supervised gradient descent procedure. As an illustration we considered two examples. The first example deals with pixel classification in multispectral satellite images. In our second example we used the fuzzy decision system to analyze data from magnetic resonance imaging (MRI) scans for tissue classification.
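
    A hedged sketch of the front half of such a fuzzy-neural model: Gaussian membership functions fuzzify two crisp inputs, rule nodes combine memberships by product, and a weighted defuzzification produces a decision. The membership parameters and rule consequents are hypothetical, and the supervised gradient-descent rule learning described in the record is omitted.

```python
import numpy as np

def gauss_membership(x, centers, widths):
    """Input layers: map a crisp input to membership degrees in a few fuzzy sets."""
    return np.exp(-((x - centers) ** 2) / (2 * widths ** 2))

# Two inputs, three fuzzy sets each ("low", "medium", "high"); illustrative parameters.
centers = np.array([0.2, 0.5, 0.8])
widths = np.full(3, 0.15)

x1, x2 = 0.35, 0.7
m1 = gauss_membership(x1, centers, widths)
m2 = gauss_membership(x2, centers, widths)

# Rule layer: every (set_i, set_j) pair is a rule; firing strength = product of memberships.
firing = np.outer(m1, m2).ravel()

# Defuzzification with hypothetical rule consequents, e.g. class scores per rule.
consequents = np.linspace(0, 1, firing.size)
decision = firing @ consequents / firing.sum()
print("memberships:", np.round(m1, 3), np.round(m2, 3), "decision:", round(decision, 3))
```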

  4. Exploiting network redundancy for low-cost neural network realizations.

    NARCIS (Netherlands)

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT

    1996-01-01

    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  5. Early neural network development history: the age of Camelot.

    Science.gov (United States)

    Eberhart, R C; Dobbins, R W

    1990-01-01

    What the authors refer to as the first of four ages in the development of neural networks is discussed. It begins about a century ago with the American psychologist William James, and ends in 1969 with the publication of the book by M. Minsky and S. Papert on perceptrons. The history of this period is reviewed, focusing on people rather than just on theory or technology. The contributions of a number of individuals are discussed and related to how neural network tools are being implemented today. The selection of individuals discussed is somewhat arbitrary and not exhaustive, as the intent is to provide a broad sampling of people who contributed to current neural network technology. Besides James, the authors cover the work of W.C. McCulloch and W. Pitts (1943), D. Hebb (1949), and B. Widrow and M. Hoff (1960).

  6. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.

  7. Analysis of convergence performance of neural networks ranking algorithm.

    Science.gov (United States)

    Zhang, Yongquan; Cao, Feilong

    2012-10-01

    The ranking problem is to learn a real-valued function which gives rise to a ranking over an instance space; it has gained much attention in machine learning in recent years. This article gives an analysis of the convergence performance of a neural network ranking algorithm by means of the given samples and the approximation property of neural networks. The upper bounds on the convergence rate provided by our results can be considerably tight and independent of the dimension of the input space when the target function satisfies some smoothness condition. The obtained results imply that neural networks are able to adapt to the ranking function in the instance space. Hence the obtained results are able to circumvent the curse of dimensionality under some smoothness condition.

  8. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  9. On The Comparison of Artificial Neural Network (ANN) and ...

    African Journals Online (AJOL)

    West African Journal of Industrial and Academic Research ... This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for ... Keywords: Multinomial Logistic Regression, Artificial Neural Network, Correct classification rate.

  10. A NEURAL OSCILLATOR-NETWORK MODEL OF TEMPORAL PATTERN GENERATION

    NARCIS (Netherlands)

    Schomaker, Lambert

    Most contemporary neural network models deal with essentially static, perceptual problems of classification and transformation. Models such as multi-layer feedforward perceptrons generally do not incorporate time as an essential dimension, whereas biological neural networks are inherently temporal

  11. Neural PID Control Strategy for Networked Process Control

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang

    2013-01-01

    Full Text Available A new method with a two-layer hierarchy is presented, based on a neural proportional-integral-derivative (PID) iterative learning method over the communication network, for the closed-loop automatic tuning of a PID controller. It can enhance the performance of the well-known simple PID feedback control loop in the local field when real networked process control is applied to systems with uncertain factors, such as external disturbances or randomly delayed measurements. The proposed PID iterative learning method is implemented by backpropagation neural networks whose weights are updated by minimizing the tracking-error entropy of the closed-loop system. Convergence in the mean-square sense is analysed for closed-loop networked control systems. To demonstrate the potential applications of the proposed strategies, a pressure-tank experiment is provided to show the usefulness and effectiveness of the proposed design method in networked process control systems.
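
    For orientation, the sketch below shows only the discrete PID loop on a toy first-order plant, i.e., the control law whose gains the proposed neural iterative-learning layer would retune over the network. The plant, gains, and sampling time are illustrative assumptions, and network-induced delays are not modelled.

```python
# Toy first-order plant: y[k+1] = a*y[k] + b*u[k], driven by a discrete PID controller.
a, b = 0.9, 0.1
Kp, Ki, Kd = 2.0, 0.5, 0.1        # gains the neural tuning layer would adjust
dt, setpoint = 0.1, 1.0

y, integral, prev_err = 0.0, 0.0, 0.0
trace = []
for k in range(200):
    err = setpoint - y
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = Kp * err + Ki * integral + Kd * derivative   # PID control law
    y = a * y + b * u                                 # plant response
    prev_err = err
    trace.append(y)

print("final output:", round(trace[-1], 3), "tracking error:", round(setpoint - trace[-1], 4))
```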

  12. Neural network design for J function approximation in dynamic programming

    CERN Document Server

    Pang, X

    1998-01-01

    This paper shows that a new type of artificial neural network (ANN) -- the Simultaneous Recurrent Network (SRN) -- can, if properly trained, solve a difficult function approximation problem which conventional ANNs -- either feedforward or Hebbian -- cannot. This problem, the problem of generalized maze navigation, is typical of problems which arise in building true intelligent control systems using neural networks. (Such systems are discussed in the chapter by Werbos in K. Pribram, Brain and Values, Erlbaum 1998.) The paper provides a general review of other types of recurrent networks and alternative training techniques, including a flowchart of the Error Critic training design, arguably the only plausible approach to explain how the brain adapts time-lagged recurrent systems in real time. The C code of the test is appended. As in the first tests of backprop, the training here was slow, but there are ways to do better after more experience using this type of network.

  13. Type-2 fuzzy neural networks and their applications

    CERN Document Server

    Aliev, Rafik Aziz

    2014-01-01

    This book deals with the theory, design principles, and application of hybrid intelligent systems using type-2 fuzzy sets in combination with other paradigms of Soft Computing technology such as Neuro-Computing and Evolutionary Computing. It provides a self-contained exposition of the foundation of type-2 fuzzy neural networks and presents a vast compendium of its applications to control, forecasting, decision making, system identification and other real problems. Type-2 Fuzzy Neural Networks and Their Applications is helpful for teachers and students of universities and colleges, for scientis

  14. Inferring low-dimensional microstructure representations using convolutional neural networks

    Science.gov (United States)

    Lubbers, Nicholas; Lookman, Turab; Barros, Kipton

    2017-11-01

    We apply recent advances in machine learning and computer vision to a central problem in materials informatics: the statistical representation of microstructural images. We use activations in a pretrained convolutional neural network to provide a high-dimensional characterization of a set of synthetic microstructural images. Next, we use manifold learning to obtain a low-dimensional embedding of this statistical characterization. We show that the low-dimensional embedding extracts the parameters used to generate the images. According to a variety of metrics, the convolutional neural network method yields dramatically better embeddings than the analogous method derived from two-point correlations alone.

  15. Modelling of word usage frequency dynamics using artificial neural network

    Science.gov (United States)

    Maslennikova, Yu S.; Bochkarev, V. V.; Voloskov, D. S.

    2014-03-01

    In this paper a method for modelling word usage frequency time series is proposed. An artificial feedforward neural network was used to predict word usage frequencies. The neural network was trained using the maximum likelihood criterion. The Google Books Ngram corpus was used for the analysis. This database provides a large amount of data on the frequency of specific word forms for 7 languages. Statistical modelling of word usage frequency time series allows finding optimal fitting and filtering algorithms for subsequent lexicographic analysis and verification of frequency trend models.

  16. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2009-11-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  17. Neural network for sonogram gap filling

    DEFF Research Database (Denmark)

    Klebæk, Henrik; Jensen, Jørgen Arendt; Hansen, Lars Kai

    1995-01-01

    ... a neural network for predicting the mean frequency of the velocity signal and its variance. The neural network then predicts the evolution of the mean and variance in the gaps, and the sonogram and audio signal are reconstructed from these. The technique is applied to in-vivo data from the carotid artery ... in the sonogram and in the audio signal, rendering the audio signal useless, thus making diagnosis difficult. The current goal for ultrasound scanners is to maintain a high refresh rate for the B-mode image and at the same time attain a high maximum velocity in the sonogram display. This precludes the intermixing ... series, and is shown to yield better results, i.e., the variances of the predictions are lower. The ability of the neural predictor to reconstruct both the sonogram and the audio signal, when only 50% of the time is used for velocity data acquisition, is demonstrated for the in-vivo data ...

  18. Digital Neural Networks for New Media

    Science.gov (United States)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  19. Optimizing neural network models: motivation and case studies

    OpenAIRE

    Harp, S A; T. Samad

    2012-01-01

    Practical successes have been achieved with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally rem...

  20. Dynamic Object Identification with SOM-based neural networks

    Directory of Open Access Journals (Sweden)

    Aleksey Averkin

    2014-03-01

    Full Text Available In this article a number of neural networks based on self-organizing maps that can be successfully used for dynamic object identification are described. Unique SOM-based modular neural networks with vector quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms of learning and operation of such SOM-based neural networks are described in detail, and some experimental results and a comparison with some other neural networks are given.
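
    A minimal self-organizing map training loop, the basic building block that the modular networks described above combine with associative memory and recurrence. The grid size, toy data, and decay schedules are illustrative assumptions, not the authors' architectures.

```python
import numpy as np

rng = np.random.default_rng(4)
grid_w, grid_h, dim = 8, 8, 2
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], dtype=float)

data = rng.random((1000, dim))                 # toy 2-D observations

for step, x in enumerate(data):
    lr = 0.5 * (1 - step / len(data))          # decaying learning rate
    sigma = 3.0 * (1 - step / len(data)) + 0.5 # decaying neighbourhood radius
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))        # best-matching unit
    dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)        # grid distance to the BMU
    h = np.exp(-dist2 / (2 * sigma ** 2))                      # neighbourhood function
    weights += lr * h[:, None] * (x - weights)                 # pull neighbours toward x

print("first few codebook vectors:\n", np.round(weights[:4], 3))
```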

  1. Stock Price Prediction Based on Procedural Neural Networks

    OpenAIRE

    Jiuzhen Liang; Wei Song; Mei Wang

    2011-01-01

    We present a spatiotemporal model, namely, procedural neural networks, for stock price prediction. Compared with some successful traditional models for simulating the stock market, such as BNN (backpropagation neural networks), HMM (hidden Markov model) and SVM (support vector machine), the procedural neural network model processes both spatial and temporal information synchronously without a sliding time window, which is typically used in the well-known recurrent neural networks. Two differen...

  2. Computational capabilities of graph neural networks.

    Science.gov (United States)

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    In this paper, we will consider the approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries, in which case unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model.
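
    A hedged numpy sketch of the kind of map τ(G, n) ∈ ℝ^m discussed above: a few rounds of neighbour aggregation produce node states, and the state of the query node is read out. The weights, graph, and update rule are illustrative assumptions and do not reproduce the authors' GNN model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Small undirected graph as an adjacency matrix, with random node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))                  # node labels/features

d_state, m = 8, 2
W_self = rng.normal(0, 0.5, (d_state, 3))
W_msg = rng.normal(0, 0.5, (d_state, d_state))
W_out = rng.normal(0, 0.5, (m, d_state))

def tau(A, X, n, rounds=3):
    """Map graph G=(A, X) and node n to an m-dimensional vector via message passing."""
    H = np.tanh(X @ W_self.T)                           # initial node states
    for _ in range(rounds):
        msgs = A @ H                                     # sum of neighbour states
        H = np.tanh(X @ W_self.T + msgs @ W_msg.T)       # state update
    return W_out @ H[n]                                  # readout for the query node

print(tau(A, X, n=2))
```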

  3. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher ...

  4. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, and research on stock market analysis and prediction has received considerable attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…

  5. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  6. Artificial neural networks and support vector mac

    Indian Academy of Sciences (India)

    Quantitative structure-property relationships of electroluminescent materials: Artificial neural networks and support vector machines to predict electroluminescence of organic molecules. ALANA FERNANDES GOLIN and RICARDO STEFANI. Laboratório de Estudos de Materiais (LEMAT), Instituto de Ciências Exatas e da ...

  7. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  8. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage...

  9. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  10. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    ... from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  11. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  12. Separable explanations of neural network decisions

    DEFF Research Database (Denmark)

    Rieger, Laura

    2017-01-01

    Deep Taylor Decomposition is a method used to explain neural network decisions. When applying this method to non-dominant classifications, the resulting explanation does not reflect important features for the chosen classification. We propose that this is caused by the dense layers and propose...

  13. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2017-01-01

    In this work we evaluate the performance of two pre-trained convolutional neural networks fine-tuned on the NIST SD4 benchmark database. The obtained results show that this approach is comparable with other results in the literature, with the advantage of a fast feature extraction stage.

  14. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model...

  15. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  16. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
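
    As a rough illustration of the rectified-polynomial idea described above (not the authors' exact model), the Python sketch below scores an input pattern against stored memories with f(x) = max(x, 0)^n; raising the power n makes the score increasingly dominated by the single closest memory (the prototype regime), while n = 1 behaves like a rectified-linear feature sum. The memory count, pattern size and corruption level are assumptions.

        # Sketch of a dense associative-memory style scoring function with
        # rectified polynomial activation f(x) = max(x, 0)**n.
        import numpy as np

        def score(memories, pattern, n):
            # overlap of the pattern with every stored memory, passed through
            # a rectified polynomial and summed into a single energy-like score
            overlaps = memories @ pattern
            return np.sum(np.maximum(overlaps, 0.0) ** n)

        rng = np.random.default_rng(1)
        memories = rng.choice([-1.0, 1.0], size=(20, 64))   # 20 binary memories, 64 units
        probe = memories[0].copy()
        probe[:8] *= -1                                     # corrupt a few units

        for n in (1, 3, 10):
            # higher n: the closest memory dominates the score (prototype regime)
            print(n, score(memories, probe, n))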

  17. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the step of face detection, we propose a hybrid model combining AdaBoost and Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN will be aligned by Active Shape Model and Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: geometric feature based method and Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on Caltech database show the feasibility of our proposed model.

  18. Drinking water treatment using artificial neural network

    African Journals Online (AJOL)

    ogwueleka

    synaptic weights are used to store the knowledge.” The neural network approach is a branch of artificial intelligence. The ANN is based on a model of the human neurological system that consists of basic computing elements (called neurons) interconnected together (Figure 1). The model used for all classification attempts.

  19. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
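
    Purely as an illustrative sketch (the measured spectra, the UTA4 response matrix and the exact architecture are not reproduced here), the Python below trains a small feedforward network to map Bonner-sphere count rates to dose values and compares predictions with a chi-square-style statistic. The synthetic data, the assumed 7 spheres and 3 dose quantities, and the use of scikit-learn are all assumptions.

        # Illustrative stand-in for the dosimetry mapping: count rates in a
        # Bonner sphere spectrometer (inputs) -> neutron dose quantities (outputs).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        counts = rng.uniform(0.1, 10.0, size=(187, 7))          # 187 "spectra", 7 spheres
        doses = counts @ rng.uniform(0.01, 0.1, size=(7, 3))    # stand-in dose mapping

        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        net.fit(counts[:150], doses[:150])

        pred = net.predict(counts[150:])
        chi2 = np.sum((pred - doses[150:]) ** 2 / np.abs(doses[150:]))
        print("chi-square-style comparison of original vs. calculated doses:", chi2)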

  20. Learning chaotic attractors by neural networks

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Giles, CL; Takens, F; van den Bleek, CM

    2000-01-01

    An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to short-term predict the time series. At the same time a criterion, developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored

  1. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  2. Neural networks, penalty logic and optimality theory

    NARCIS (Netherlands)

    Blutner, R.; Benz, A.; Blutner, R.

    2009-01-01

    Ever since the discovery of neural networks, there has been a controversy between two modes of information processing. On the one hand, symbolic systems have proven indispensable for our understanding of higher intelligence, especially when cognitive domains like language and reasoning are examined.

  3. Image inpainting using a neural network

    Directory of Open Access Journals (Sweden)

    Gapon Nikolay

    2017-01-01

    Full Text Available The paper describes a new method of two-dimensional signal reconstruction by restoring static images. A new method of spatial reconstruction of static images based on a geometric model using a neural network is proposed; it is based on the search for similar blocks and copying them into the region of distorted or missing pixel values.

  4. Nano-topography Enhances Communication in Neural Cells Networks

    KAUST Repository

    Onesto, V.

    2017-08-23

    Neural cells are the smallest building blocks of the central and peripheral nervous systems. Information in neural networks and cell-substrate interactions have been heretofore studied separately. Understanding whether surface nano-topography can direct nerve cell assembly into computationally efficient networks may provide new tools and criteria for tissue engineering and regenerative medicine. In this work, we used information theory approaches and functional multi calcium imaging (fMCI) techniques to examine how information flows in neural networks cultured on surfaces with controlled topography. We found that substrate roughness Sa affects network topology. In the low nanometer range, Sa = 0-30 nm, information increases with Sa. Moreover, we found that energy density of a network of cells correlates to the topology of that network. This reinforces the view that information, energy and surface nano-topography are tightly inter-connected and should not be neglected when studying cell-cell interaction in neural tissue repair and regeneration.

  5. Distorted Character Recognition Via An Associative Neural Network

    Science.gov (United States)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in on-going neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing the machine an ability to recognize distorted objects within the same object class. It is the author's belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input Cartesian image field, then sequency filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's crosscorrelation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2] which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage and discussion about the use of the proposed neural architectures are included.

  6. The role of symmetry in neural networks and their Laplacian spectra.

    Science.gov (United States)

    de Lange, Siemon C; van den Heuvel, Martijn P; de Reus, Marcel A

    2016-11-01

    Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce systems-level network fingerprints. In this article, we examine a characteristic central peak in the spectrum of neural networks, including anatomical brain network maps of the mouse, cat and macaque, as well as anatomical and functional network maps of human brain connectivity. We link the occurrence of this central peak to the level of symmetry in neural networks, an intriguing aspect of network organization resulting from network elements that exhibit similar wiring patterns. Specifically, we propose a measure to capture the global level of symmetry of a network and show that, for both empirical networks and network models, the height of the main peak in the Laplacian spectrum is strongly related to node symmetry in the underlying network. Moreover, examination of spectra of duplication-based model networks shows that neural spectra are best approximated using a trade-off between duplication and diversification. Taken together, our results facilitate a better understanding of neural network spectra and the importance of symmetry in neural networks.
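
    A minimal sketch of the kind of computation mentioned above: build the graph Laplacian L = D - A of an undirected network and inspect its eigenvalue spectrum, whose histogram reveals peaks such as the central peak discussed in the record. The random network used here is an assumption, included only to make the example self-contained.

        # Laplacian spectrum of an undirected network: L = D - A, eigenvalues of L.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        A = (rng.random((n, n)) < 0.1).astype(float)   # random adjacency (assumed example)
        A = np.triu(A, 1)
        A = A + A.T                                    # symmetric, no self-loops

        L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
        eigenvalues = np.linalg.eigvalsh(L)            # real spectrum for symmetric L

        # a histogram of the spectrum exposes peaks, including any central peak
        hist, edges = np.histogram(eigenvalues, bins=20)
        print(hist)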

  7. MBVCNN: Joint convolutional neural networks method for image recognition

    Science.gov (United States)

    Tong, Tong; Mu, Xiaodong; Zhang, Li; Yi, Zhaoxiang; Hu, Pei

    2017-05-01

    To address the problem that objects in image recognition are rectangular while the inputs to convolutional neural networks are square, an object recognition model is proposed that uses the BING method for object estimation and vectorization of convolutional neural networks to handle square image input, building joint convolutional neural networks that accept images of multiple sizes. Experiments verified that the accuracy of multi-object image recognition improved by 6.70% compared with a single vectorized convolutional neural network. Therefore, the joint convolutional neural network method can enhance image recognition accuracy, especially for targets of rectangular shape.

  8. On the use of a pruning prior for neural networks

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1996-01-01

    We address the problem of using a regularization prior that prunes unnecessary weights in a neural network architecture. This prior provides a convenient alternative to traditional weight-decay. Two examples are studied to support this method and illustrate its use. First we use the sunspots...

  9. Artificial Neural Networks for Modeling Knowing and Learning in Science.

    Science.gov (United States)

    Roth, Wolff-Michael

    2000-01-01

    Advocates artificial neural networks as models for cognition and development. Provides an example of how such models work in the context of a well-known Piagetian developmental task and school science activity: balance beam problems. (Contains 59 references.) (Author/WRM)

  10. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Ziaul Huque

    2007-08-31

    This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically without the need for extensive knowledge of the chemistry involved. To aid in the development of the model, Neural Networks (NN) were used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the GEneralized Regression Neural Network (GRNN) based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in a data set without the imposition of a parametric model. The trajectory data of the reaction mechanism was generated based on the optimization techniques of genetic algorithm (GA). The NPCA-NN algorithm was then used for the reduction of the Dimethyl Ether (DME) mechanism. DME is a recently discovered fuel made from natural gas (and other feedstocks such as coal, biomass, and urban wastes), which can be used in compression ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on a meshfree technique and a time-marching solution algorithm. The project also provided valuable research experience to two graduate students.

  11. Using neural networks for prediction of nuclear parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pereira Filho, Leonidas; Souto, Kelling Cabral, E-mail: leonidasmilenium@hotmail.com, E-mail: kcsouto@bol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, RJ (Brazil); Machado, Marcelo Dornellas, E-mail: dornemd@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (GCN.T/ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil). Gerencia de Combustivel Nuclear

    2013-07-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts developed a study on the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed, but it was the 1980s that witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization and prediction, they have numerous applications in various areas, including the nuclear field. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform many core evaluations, for example in core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type is trained to predict these values. The main objective of this work is the development of an MLP artificial neural network capable of predicting, in very short time and with good accuracy, two important parameters used in the core reloading problem - Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three network models are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor, 2 - Boron Concentration, and 3 - Power Peaking Factor. (author)
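
    A simplified sketch of the three-network setup described above (same inputs, different outputs); the loading-pattern features below are random placeholders for the cycle-19 data, and the layer sizes are assumptions. scikit-learn is assumed to be available.

        # Three MLPs sharing the same inputs (loading-pattern features) but with
        # different outputs: (1) boron concentration and power peaking factor,
        # (2) boron only, (3) peaking factor only.  Data are random placeholders.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((200, 30))                       # placeholder loading patterns
        boron = X @ rng.random(30) * 10 + 800           # placeholder targets
        peak = X @ rng.random(30) * 0.01 + 1.2

        targets = {
            "boron + peaking factor": np.column_stack([boron, peak]),
            "boron only": boron,
            "peaking factor only": peak,
        }
        for name, y in targets.items():
            net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
            net.fit(X[:160], y[:160])
            print(name, "test R^2:", round(net.score(X[160:], y[160:]), 3))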

  12. Radial basis function (RBF) neural network control for mechanical systems design, analysis and Matlab simulation

    CERN Document Server

    Liu, Jinkun

    2013-01-01

    Radial Basis Function (RBF) Neural Network Control for Mechanical Systems is motivated by the need for systematic design approaches to stable adaptive control system design using neural network approximation-based techniques. The main objectives of the book are to introduce the concrete design methods and MATLAB simulation of stable adaptive RBF neural control strategies. In this book, a broad range of implementable neural network control design methods for mechanical systems are presented, such as robot manipulators, inverted pendulums, single link flexible joint robots, motors, etc. Advanced neural network controller design methods and their stability analysis are explored. The book provides readers with the fundamentals of neural network control system design.   This book is intended for the researchers in the fields of neural adaptive control, mechanical systems, Matlab simulation, engineering design, robotics and automation. Jinkun Liu is a professor at Beijing University of Aeronautics and Astronauti...
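
    The book covers stable adaptive RBF control laws with MATLAB simulations; as a much smaller illustration of the core building block only, the Python sketch below fits a Gaussian radial basis function network to a nonlinear function by linear least squares. The centers, width and target function are assumptions.

        # Minimal RBF network: y(x) ~ sum_j w_j * exp(-(x - c_j)^2 / (2 b^2)),
        # with the output weights w solved by linear least squares.
        import numpy as np

        def rbf_design(x, centers, width):
            # one Gaussian basis function per center, evaluated at every sample
            return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

        x = np.linspace(-3, 3, 200)
        y = np.sin(2 * x) + 0.1 * x ** 2          # assumed nonlinear target function

        centers = np.linspace(-3, 3, 15)
        Phi = rbf_design(x, centers, width=0.5)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

        print("max approximation error:", np.max(np.abs(Phi @ w - y)))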

  13. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful to learn the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize the too complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.

  14. Prediction of surface distress using neural networks

    Science.gov (United States)

    Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo; Cortez, Paulo

    2017-06-01

    Road infrastructures contribute to a healthy economy through a sustainable distribution of goods and services. A road network requires appropriately programmed maintenance treatments in order to keep road assets in good condition, providing maximum safety for road users under a cost-effective approach. Surface distress is the key element in identifying road condition and may be generated by many different factors. In this paper, a new data-driven approach is aimed at predicting Surface Distress Index (SDI) values, applied using data obtained from the Integrated Road Management System (IRMS) database. Artificial Neural Networks (ANNs) are used to predict the SDI index using input variables related to surface distress, i.e., crack area and width, pothole, rutting, patching and depression. The achieved results show that the ANN is able to predict SDI with a high correlation factor (R2 = 0.996). Moreover, a sensitivity analysis was applied to the ANN model, revealing the influence of the most relevant input parameters for SDI prediction, namely rutting (59.8%), crack width (29.9%), crack area (5.0%), patching (3.0%), pothole (1.7%) and depression (0.3%).

  15. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  16. Extracting knowledge from supervised neural networks in image processing

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert; Jain, R.; Abraham, A.; Faucher, C.; van der Zwaag, B.J.

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  17. Neural network based load frequency control for restructuring power

    African Journals Online (AJOL)

    2012-03-01

    In this study, an artificial neural network (ANN) application of load frequency control (LFC) of a multi-area power system using a neural network controller is presented. The comparison between a conventional Proportional Integral (PI) controller and the proposed artificial neural networks ...

  18. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    The application of neural networks to model a laboratory scale inverse fluidized bed reactor has been studied. A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological ...

  19. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    Simple recurrent neural networks are widely used in time series prediction. Most researchers and application developers often choose arbitrarily between Elman or Jordan simple recurrent neural networks for their applications. A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used.

  20. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of RBF neural network for state estimation is investigated by testing its applicability on an IEEE 14 bus ...

  1. New Neural Network Methods for Forecasting Regional Employment

    NARCIS (Netherlands)

    Patuelli, R.; Reggiani, A; Nijkamp, P.; Blien, U.

    2006-01-01

    In this paper, a set of neural network (NN) models is developed to compute short-term forecasts of regional employment patterns in Germany. Neural networks are modern statistical tools based on learning algorithms that are able to process large amounts of data. Neural networks are enjoying

  2. The Artificial Neural Network as a Means for Modeling Nonlinear Systems

    OpenAIRE

    Drábek Oldřich; Taufer Ivan

    1998-01-01

    The paper deals with nonlinear system identification based on neural networks. The topic of this publication is the simulation of training and testing a neural network. The contribution is aimed at technologists who are familiar with classical identification problems but whose knowledge of neural-network-based identification is still at the stage of theoretical foundations.

  3. The Artificial Neural Network as a Means for Modeling Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Drábek Oldřich

    1998-12-01

    Full Text Available The paper deals with nonlinear system identification based on neural networks. The topic of this publication is the simulation of training and testing a neural network. The contribution is aimed at technologists who are familiar with classical identification problems but whose knowledge of neural-network-based identification is still at the stage of theoretical foundations.

  4. Algorithm For A Self-Growing Neural Network

    Science.gov (United States)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  5. Optical implementation of neural networks

    Science.gov (United States)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computing (ONC) using inexpensive pocket size liquid crystal televisions (LCTVs) had been developed by the graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computing has only 8×8=64 neurons, it can be easily extended to 16×20=320 neurons. The major advantages of this LCTV architecture as compared with other reported ONCs, are low cost and the flexibility to operate. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  6. Identifying Jets Using Artificial Neural Networks

    Science.gov (United States)

    Rosand, Benjamin; Caines, Helen; Checa, Sofia

    2017-09-01

    We investigate particle jet interactions with the Quark Gluon Plasma (QGP) using artificial neural networks modeled on those used in computer image recognition. We create jet images by binning jet particles into pixels and preprocessing every image. We analyzed the jets with a Multi-layered maxout network and a convolutional network. We demonstrate each network's effectiveness in differentiating simulated quenched jets from unquenched jets, and we investigate the method that the network uses to discriminate among different quenched jet simulations. Finally, we develop a greater understanding of the physics behind quenched jets by investigating what the network learnt as well as its effectiveness in differentiating samples. Yale College Freshman Summer Research Fellowship in the Sciences and Engineering.

  7. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    Science.gov (United States)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. Now, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.

  8. Artificial neural networks as quantum associative memory

    Science.gov (United States)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully-connected, and also consider neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n ... This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy.

  9. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.

  10. Matrix representation of a Neural Network

    DEFF Research Database (Denmark)

    Christensen, Bjørn Klint

    Processing, by David Rummelhart (Rummelhart 1986) for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear ... network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes all indexed from 0. Bias-node for the hidden...
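
    In the spirit of the matrix formulation described above, here is a minimal sketch of the forward pass of a three-layer non-linear network with I input, J hidden and K output nodes, using bias terms and sigmoid activations; the sizes and random weights are arbitrary examples, and the backpropagation step is omitted for brevity.

        # Forward pass of a three-layer network written with matrices:
        # I input nodes, J hidden nodes, K output nodes, bias on hidden and output.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        I, J, K = 4, 5, 3                     # example sizes
        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(J, I))          # input -> hidden weights
        b1 = rng.normal(size=J)               # hidden bias
        W2 = rng.normal(size=(K, J))          # hidden -> output weights
        b2 = rng.normal(size=K)               # output bias

        x = rng.normal(size=I)                # one input vector
        hidden = sigmoid(W1 @ x + b1)
        output = sigmoid(W2 @ hidden + b2)
        print(output)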

  11. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents mathematical proof about the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation; thus making the algorithm suitable for real-time implementations.
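
    As a toy version of the first stage described above (estimating the period with a single neuron that has a sine transfer function), the sketch below minimizes the mean-square error of a sine fit by gradient descent; the signal, learning rate and initial guess are assumptions, and the second network for the Fourier coefficients is not shown.

        # Estimate the angular frequency of a periodic signal by gradient descent
        # on a single neuron with a sine transfer function: yhat = sin(omega * t).
        import numpy as np

        t = np.linspace(0.0, 10.0, 1000)
        true_omega = 2.1
        y = np.sin(true_omega * t)            # assumed noiseless periodic signal

        omega = 2.0                           # initial guess
        lr = 1e-3
        for _ in range(2000):
            err = np.sin(omega * t) - y
            grad = np.mean(2 * err * t * np.cos(omega * t))   # d(MSE)/d(omega)
            omega -= lr * grad

        print("estimated omega:", omega, "true omega:", true_omega)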

  12. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  13. Fuzzy logic and neural network technologies

    Science.gov (United States)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  14. A Topological Perspective of Neural Network Structure

    Science.gov (United States)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  15. Proceedings of the Neural Network Workshop for the Hanford Community

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.

    1994-01-01

    These proceedings were generated from a series of presentations made at the Neural Network Workshop for the Hanford Community. The abstracts and viewgraphs of each presentation are reproduced in these proceedings. This workshop was sponsored by the Computing and Information Sciences Department in the Molecular Science Research Center (MSRC) at the Pacific Northwest Laboratory (PNL). Artificial neural networks constitute a new information processing technology that is destined, within the next few years, to provide the world with a vast array of new products. A major reason for this is that artificial neural networks are able to provide solutions to a wide variety of complex problems in a much simpler fashion than is possible using existing techniques. In recognition of these capabilities, many scientists and engineers are exploring the potential application of this new technology to their fields of study. An artificial neural network (ANN) can be a software simulation, an electronic circuit, optical system, or even an electro-chemical system designed to emulate some of the brain's rudimentary structure as well as some of the learning processes that are believed to take place in the brain. For a very wide range of applications in science, engineering, and information technology, ANNs offer a complementary and potentially superior approach to that provided by conventional computing and conventional artificial intelligence. This is because, unlike conventional computers, which have to be programmed, ANNs essentially learn from experience and can be trained in a straightforward fashion to carry out tasks ranging from the simple to the highly complex.

  16. Tumor Diagnosis Using Backpropagation Neural Network Method

    Science.gov (United States)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier Transform infrared (FT-IR) spectrum obtained by a new Fiberoptic Evanescent Wave Fourier Transform Infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbency values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbency spectra using chemical factor analysis. These abstract features or factors are also used in the classification.

  17. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks with the best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory to inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline for deciding the values of parameters in a simulation of a spiking neural network.
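
    To make the quoted parameter values concrete, here is a small sketch (not the author's simulator) that wires a random network with 80% excitatory and 20% inhibitory neurons and a 2% connection probability, and counts the resulting connections; the network size and synaptic weight value are placeholders.

        # Build a random network with the "common values" quoted above:
        # 2% connection probability, 80% excitatory / 20% inhibitory neurons.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        p_connect = 0.02
        excitatory = rng.random(n) < 0.8                 # True for excitatory neurons

        # signed weight matrix: +w for excitatory sources, -w for inhibitory sources
        connected = rng.random((n, n)) < p_connect
        np.fill_diagonal(connected, False)
        w = 0.1                                          # placeholder synaptic weight
        W = np.where(connected, np.where(excitatory[:, None], w, -w), 0.0)

        print("neurons:", n,
              "excitatory:", int(excitatory.sum()),
              "synapses:", int(connected.sum()))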

  18. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).

  19. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security

    Science.gov (United States)

    Kang, Min-Joo

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus. PMID:27271802

  20. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    Science.gov (United States)

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.

  1. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    Directory of Open Access Journals (Sweden)

    Min-Joo Kang

    Full Text Available A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.
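
    As a loose illustration of the classification stage only (the probability-based feature extraction from CAN packets and the DBN pre-training described in the record are not reproduced), the sketch below trains a small deep network on placeholder feature vectors and reports per-class probabilities. The feature dimensions, class separation and use of scikit-learn are assumptions.

        # Placeholder for the IDS classification stage: feature vectors extracted
        # from in-vehicle network packets -> probability of "normal" vs. "attack".
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        normal = rng.normal(0.0, 1.0, size=(500, 16))        # assumed feature vectors
        attack = rng.normal(1.5, 1.0, size=(500, 16))
        X = np.vstack([normal, attack])
        y = np.array([0] * 500 + [1] * 500)                  # 0 = normal, 1 = attack

        ids = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
        ids.fit(X, y)

        packet = rng.normal(1.5, 1.0, size=(1, 16))          # a new (attack-like) packet
        print("class probabilities [normal, attack]:", ids.predict_proba(packet)[0])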

  2. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the
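
    The record above describes a GA-trained feed-forward net on 8 x 8 bitmaps with five output nodes; as a rough sketch of that idea (not the Sandia Neural Network Designer software), the Python below evolves the flattened weight vector of a tiny one-hidden-layer network with a simple genetic loop. The toy 'characters', population size and mutation scale are all assumptions.

        # Toy genetic training of a feed-forward net: 64 inputs (8x8 bitmap),
        # 8 hidden nodes, 5 output nodes (one per character).  The "characters"
        # are random bitmaps standing in for letters.
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 64, 8, 5
        chars = rng.integers(0, 2, size=(n_out, n_in)).astype(float)   # toy bitmaps
        labels = np.eye(n_out)
        n_weights = n_hid * n_in + n_out * n_hid

        def forward(weights, x):
            W1 = weights[: n_hid * n_in].reshape(n_hid, n_in)
            W2 = weights[n_hid * n_in:].reshape(n_out, n_hid)
            return np.tanh(W2 @ np.tanh(W1 @ x))

        def fitness(weights):
            # negative squared error over all characters (higher is better)
            return -sum(np.sum((forward(weights, c) - t) ** 2)
                        for c, t in zip(chars, labels))

        population = [rng.normal(0, 0.5, n_weights) for _ in range(40)]
        for generation in range(200):
            population.sort(key=fitness, reverse=True)
            parents = population[:10]                       # keep the fittest
            children = [p + rng.normal(0, 0.05, n_weights)  # mutate copies of parents
                        for p in parents for _ in range(3)]
            population = parents + children

        best = max(population, key=fitness)
        print("output node chosen per character:",
              [int(np.argmax(forward(best, c))) for c in chars])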

  3. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    ... distribution, e.g. a particular book. In this experiment, we use a collection of writings by Nietzsche to train our network. In total, this corpus contains ...

  4. Neural Networks for Beat Perception in Musical Rhythm.

    Science.gov (United States)

    Large, Edward W; Herrera, Jorge A; Velasco, Marc J

    2015-01-01

    Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly 40 years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. In a pulse synchronization study, we test the model's key prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result shows that participants perceive the pulse at the theoretically predicted frequency. This model is one of the few consistent with neurophysiological evidence on the role of neural oscillation, and it explains a phenomenon that other computational models fail to explain. Because it is based on a canonical model, the predictions hold for an entire family of dynamical systems, not only a specific one. Thus, this model provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm.

  5. Neural Network Based Intelligent Sootblowing System

    Energy Technology Data Exchange (ETDEWEB)

    Mark Rhode

    2005-04-01

    Due to the composition of coal, particulate matter is also a by-product of coal combustion. Modern day utility boilers are usually fitted with electrostatic precipitators to aid in the collection of particulate matter. Although extremely efficient, these devices are sensitive to rapid changes in inlet mass concentration as well as total mass loading. Traditionally, utility boilers are equipped with devices known as sootblowers, which use steam, water or air to dislodge and clean the surfaces within the boiler and are operated based upon established rules or operator judgment. Poor sootblowing regimes can influence particulate mass loading to the electrostatic precipitators. The project applied a neural network intelligent sootblowing system in conjunction with state-of-the-art controls and instruments to optimize the operation of a utility boiler and systematically control boiler slagging/fouling. This optimization process targeted a 30% reduction in NOx, a 2% improvement in efficiency and a 5% reduction in opacity. The neural network system proved to be a non-invasive system which can readily be adapted to virtually any utility boiler. Specific conclusions from this neural network application are listed below. These conclusions should be used in conjunction with the specific details provided in the technical discussions of this report to develop a thorough understanding of the process.

  6. Investment Valuation Analysis with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hüseyin İNCE

    2017-07-01

    Full Text Available This paper shows that discounted cash flow and net present value, which are traditional investment valuation models, can be combined with artificial neural network model forecasting. The main inputs for the valuation models, such as revenue, costs, capital expenditure, and their growth rates, are heavily related to sector dynamics and macroeconomics. The growth rates of those inputs are related to inflation and exchange rates. Therefore, predicting inflation and exchange rates is a critical issue for the valuation output. In this paper, the Turkish economy’s inflation rate and the exchange rate of USD/TRY are forecast by artificial neural networks and applied to the discounted cash flow model. Finally, the results are benchmarked with conventional practices.

  7. Evaluating neural networks and artificial intelligence systems

    Science.gov (United States)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  8. Artificial Neural Network for Displacement Vectors Determination

    Directory of Open Access Journals (Sweden)

    P. Bohmann

    1997-09-01

    Full Text Available An artificial neural network (NN) for displacement vector (DV) determination is presented in this paper. DVs are computed in areas which are essential for image analysis and computer vision, i.e. areas where there are edges, lines, corners, etc. These special features are found by edge operators followed by filtration. The filtration is performed by a threshold function. The next step is DV computation by a 2D Hamming artificial neural network. The DV computation method is based on full-search block matching algorithms. Because of the pre-processing (edge finding), the correlation function is very simple, the process of DV determination needs less computation, and the structure of the NN is simpler.
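
    A minimal Python sketch of the full-search block matching step described above is given below; it finds the displacement vector of one block by exhaustively minimizing the sum of absolute differences, which here stands in for the match selection performed by the 2D Hamming network (the block size, search radius, and toy frames are assumptions).

        import numpy as np

        def full_search_dv(prev, curr, top, left, block=8, radius=4):
            """Full-search block matching: find the displacement (dy, dx) of a block of `curr` within `prev`."""
            ref = curr[top:top+block, left:left+block].astype(float)
            best, best_dv = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                        continue
                    cand = prev[y:y+block, x:x+block].astype(float)
                    sad = np.abs(ref - cand).sum()   # sum of absolute differences
                    if sad < best:
                        best, best_dv = sad, (dy, dx)
            return best_dv

        # Toy example: a bright square shifted by (1, 2) pixels between frames.
        prev = np.zeros((32, 32)); prev[10:18, 10:18] = 1.0
        curr = np.zeros((32, 32)); curr[11:19, 12:20] = 1.0
        print(full_search_dv(prev, curr, 11, 12))   # expected (-1, -2)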

  9. Neural Network Program Package for Prosody Modeling

    Directory of Open Access Journals (Sweden)

    J. Santarius

    2004-04-01

    Full Text Available This contribution describes the programme for one part of automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented the considerable improvement of the naturalness of synthetic speech, but this approach requires completing the input feature values by hand. This completion takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for the text and speech signal pre-processing, neural network training, sensitivity analysis, result processing, and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].

  10. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  11. Hierarchical Neural Network Structures for Phoneme Recognition

    CERN Document Server

    Vasquez, Daniel; Minker, Wolfgang

    2013-01-01

    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task, where a Hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayer Perceptron. Additionally, the output of the first level serves as the second level's input. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information. The computational time could be reduced by 57% whilst keeping the system accuracy comparable to the baseline hierarchical approach.

  12. Evaluating Functional Autocorrelation within Spatially Distributed Neural Processing Networks*

    Science.gov (United States)

    Derado, Gordana; Bowman, F. Dubois; Ely, Timothy D.; Kilts, Clinton D.

    2010-01-01

    Data-driven statistical approaches, such as cluster analysis or independent component analysis, applied to in vivo functional neuroimaging data help to identify neural processing networks that exhibit similar task-related or resting-state patterns of activity. Ideally, the measured brain activity for voxels within such networks should exhibit high autocorrelation. An important limitation is that the algorithms do not typically quantify or statistically test the strength or nature of the within-network relatedness between voxels. To extend the results given by such data-driven analyses, we propose the use of Moran’s I statistic to measure the degree of functional autocorrelation within identified neural processing networks and to evaluate the statistical significance of the observed associations. We adapt the conventional definition of Moran’s I, for applicability to neuroimaging analyses, by defining the global autocorrelation index using network-based neighborhoods. Also, we compute network-specific contributions to the overall autocorrelation. We present results from a bootstrap analysis that provide empirical support for the use of our hypothesis testing framework. We illustrate our methodology using positron emission tomography (PET) data from a study that examines the neural representation of working memory among individuals with schizophrenia and functional magnetic resonance imaging (fMRI) data from a study of depression. PMID:21643436
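
    For reference, the following Python sketch computes the global Moran's I with a network-based neighborhood weight matrix of the kind the abstract describes; the toy voxel values and two-network partition are hypothetical, and the paper's network-specific contributions and bootstrap test are not reproduced here.

        import numpy as np

        def morans_i(x, w):
            """Global Moran's I for values x and a symmetric spatial weight matrix w.
            In the network-based variant, w[i, j] = 1 when voxels i and j belong to the
            same identified network neighborhood, 0 otherwise (diagonal left at 0)."""
            x = np.asarray(x, dtype=float)
            z = x - x.mean()
            n = x.size
            num = (w * np.outer(z, z)).sum()
            den = (z ** 2).sum()
            return (n / w.sum()) * num / den

        # Toy example: 6 voxels in two "networks" of 3 voxels each.
        x = np.array([2.0, 2.1, 1.9, -1.0, -1.2, -0.9])
        w = np.zeros((6, 6))
        w[:3, :3] = 1
        w[3:, 3:] = 1
        np.fill_diagonal(w, 0)
        print(morans_i(x, w))   # close to +1: strong within-network autocorrelation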

  13. Neural Network Solves "Traveling-Salesman" Problem

    Science.gov (United States)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.
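
    The circuit details are not given in the record, but the standard Hopfield-Tank formulation behind such networks minimizes an energy of the form sketched in Python below; the penalty weights and the four-city distance matrix are illustrative assumptions, not the values used on the chip.

        import numpy as np

        def tsp_energy(v, d, A=500.0, B=500.0, D=1.0):
            """Hopfield-Tank style energy for the travelling-salesman problem.
            v[i, t] ~ 1 if city i is visited at tour position t. The first two terms
            penalize rows/columns that do not sum to exactly one (each city visited
            once, one city per position); the last term is the tour length."""
            n = v.shape[0]
            row_penalty = ((v.sum(axis=1) - 1.0) ** 2).sum()
            col_penalty = ((v.sum(axis=0) - 1.0) ** 2).sum()
            length = 0.0
            for t in range(n):
                length += (d * np.outer(v[:, t], v[:, (t + 1) % n])).sum()
            return A * row_penalty + B * col_penalty + D * length

        # Toy check: a valid 4-city tour has zero constraint penalty, so only the tour length remains.
        d = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]], float)
        tour = np.eye(4)            # visit city i at position i
        print(tsp_energy(tour, d))  # 4.0 for this distance matrix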

  14. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  15. Convolutional Neural Networks for Font Classification

    OpenAIRE

    Tensmeyer, Chris; Saunders, Daniel; Martinez, Tony

    2017-01-01

    Classifying pages or text lines into font categories aids transcription because single font Optical Character Recognition (OCR) is generally more accurate than omni-font OCR. We present a simple framework based on Convolutional Neural Networks (CNNs), where a CNN is trained to classify small patches of text into predefined font classes. To classify page or line images, we average the CNN predictions over densely extracted patches. We show that this method achieves state-of-the-art performance...

  16. Deep Learning in Neural Networks: An Overview

    OpenAIRE

    Schmidhuber, Juergen

    2014-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpr...

  17. A Dynamic Neural Network Approach to CBM

    Science.gov (United States)

    2011-03-15

    Therefore post-processing is needed to extract the time difference between corresponding events from which to calculate the crankshaft rotational speed...potentially already available from existing sensors (such as a crankshaft timing device) and a Neural Network processor to carry out the calculation. As...files are designated with the “_genmod” suffix. These files were the sources for the training and testing sets and made the extraction process easy

  18. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  19. Hardware Neural Networks Modeling for Computing Different Performance Parameters of Rectangular, Circular, and Triangular Microstrip Antennas

    Directory of Open Access Journals (Sweden)

    Taimoor Khan

    2014-01-01

    Full Text Available In the last decade, neural network-based modeling has been used for computing different performance parameters of microstrip antennas because of its learning and generalization features. Most of the created neural models are based on software simulation. Since neural networks are inherently massively parallel, a parallel hardware implementation can be created to build a faster computing machine that takes advantage of this parallelism. This paper demonstrates a generalized neural network model created on a field programmable gate array (FPGA) based reconfigurable hardware platform for computing different performance parameters of microstrip antennas. Thus, the proposed approach provides a platform for developing low-cost neural network-based FPGA simulators for microwave applications. Also, the results obtained by this approach are in very good agreement with the measured results available in the literature.

  20. Predicting company growth using logistic regression and neural networks

    Directory of Open Access Journals (Sweden)

    Marijana Zekić-Sušac

    2016-12-01

    Full Text Available The paper aims to establish an efficient model for predicting company growth by leveraging the strengths of logistic regression and neural networks. A real dataset of Croatian companies was used which described the relevant industry sector, financial ratios, income, and assets in the input space, with a dependent binomial variable indicating whether a company was high-growth, defined as annualized growth in assets of more than 20% a year over a three-year period. Due to the large number of input variables, factor analysis was performed in the pre-processing stage in order to extract the most important input components. Building an efficient model with a high classification rate and explanatory ability required application of two data mining methods: logistic regression as a parametric and neural networks as a non-parametric method. The methods were tested on models with and without variable reduction. The classification accuracy of the models was compared using statistical tests and ROC curves. The results showed that neural networks produce a significantly higher classification accuracy when incorporating all available variables. The paper further discusses the advantages and disadvantages of both approaches, i.e. logistic regression and neural networks, in modelling company growth. The suggested model is potentially of benefit to investors and economic policy makers as it provides support for recognizing companies with growth potential, especially during times of economic downturn.

  1. Spiking neural network-based control chart pattern recognition

    Directory of Open Access Journals (Sweden)

    Medhat H.A. Awadalla

    2012-03-01

    Full Text Available Due to increasing competition, consumers have become more critical in choosing products, and product quality has become more important. Statistical Process Control (SPC) is usually used to improve the quality of products. Control charting plays the most important role in SPC. Control charts help to monitor the behavior of a process in order to determine whether it is stable or not. Unnatural patterns in control charts mean that there are some unnatural causes of variation in the process. Spiking neural networks (SNNs) are the third generation of artificial neural networks; they treat time as an important feature for information representation and processing. In this paper, a spiking neural network architecture is proposed for control chart pattern recognition (CCPR). Furthermore, enhancements to the SpikeProp learning algorithm are proposed. These enhancements provide additional learning rules for the synaptic delays, time constants, and neuron thresholds. Simulated experiments have been conducted and the achieved results show a remarkable improvement in the overall performance compared with artificial neural networks.

  2. Multilingual Text Detection with Nonlinear Neural Network

    Directory of Open Access Journals (Sweden)

    Lin Li

    2015-01-01

    Full Text Available Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features, and we combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network, based on a traditional Convolutional Neural Network, that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.

  3. MHC haplotype analysis by artificial neural networks.

    Science.gov (United States)

    Bellgard, M I; Tay, G K; Hiew, H L; Witt, C S; Ketheesan, N; Christiansen, F T; Dawkins, R L

    1998-01-01

    Conventional matching is based on numbers of alleles shared between donor and recipient. This approach, however, ignores the degree of relationship between alleles and haplotypes, and therefore the actual degree of difference. To address this problem, we have compared family members using a block matching technique which reflects differences in genomic sequences. All parents and siblings had been genotyped using conventional MHC typing so that haplotypes could be assigned and relatives could be classified as sharing 0, 1 or 2 haplotypes. We trained an Artificial Neural Network (ANN) with subjects from 6 families (85 comparisons) to distinguish between relatives. Using the outputs of the ANN, we developed a score, the Histocompatibility Index (HI), as a measure of the degree of difference. Subjects from a further 3 families (106 profile comparisons) were tested. The HI score for each comparison was plotted. We show that the HI score is trimodal allowing the definition of three populations corresponding to approximately 0, 1 or 2 haplotype sharing. The means and standard deviations of the three populations were found. As expected, comparisons between family members sharing 2 haplotypes resulted in high HI scores with one exception. More interestingly, this approach distinguishes between the 1 and 0 haplotype groups, with some informative exceptions. This distinction was considered too difficult to attempt visually. The approach provides promise in the quantification of degrees of histocompatibility.

  4. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford university press, 1995.

  5. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model on unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
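
    As a rough illustration of the kind of back-propagation regression model described above, the Python sketch below trains a one-hidden-layer network on synthetic mix-proportion data; the input ranges, target formula, and network size are invented stand-ins for the literature data sets used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical inputs: [cement, water/cement ratio, max aggregate size, slump]
        # and a synthetic strength target standing in for the literature data.
        X = rng.uniform([250, 0.35, 10, 50], [450, 0.65, 25, 150], size=(200, 4))
        y = (0.12 * X[:, 0] - 60.0 * X[:, 1] + 0.2 * X[:, 2]
             - 0.05 * X[:, 3] + rng.normal(0, 1.0, 200))

        # Standardize inputs and target to help gradient descent.
        Xs = (X - X.mean(0)) / X.std(0)
        ys = (y - y.mean()) / y.std()

        W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
        W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
        lr = 0.05

        for epoch in range(2000):
            h = np.tanh(Xs @ W1 + b1)            # hidden layer
            pred = (h @ W2 + b2).ravel()         # linear output for regression
            err = pred - ys
            # Back-propagate the mean-squared-error gradients.
            g_out = 2 * err[:, None] / len(ys)
            gW2 = h.T @ g_out; gb2 = g_out.sum(0)
            g_h = g_out @ W2.T * (1 - h ** 2)
            gW1 = Xs.T @ g_h; gb1 = g_h.sum(0)
            W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

        print("final training MSE:", float((err ** 2).mean()))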

  6. Improved Extension Neural Network and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2014-01-01

    Full Text Available Extension neural network (ENN) is a new neural network that is a combination of extension theory and artificial neural network (ANN). The learning algorithm of ENN is a supervised learning algorithm. One of the important issues in the field of classification and recognition with ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to solve this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and at the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train ENN. Compared with traditional ENN, the proposed improved ENN (IENN) has better performance. Moreover, IENN is independent of the supervised learning algorithms and initial labeled data. Experimental results verify the effectiveness and applicability of our proposed work.

  7. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    Energy Technology Data Exchange (ETDEWEB)

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, with 47 being from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick-stop training procedure. Therefore, the samples were split into training, calibration and prediction subsets. Special techniques, using genetic algorithms, were developed to representatively split the sample into the three subsets. Two separate approaches were tried. In one approach, the screened and unscreened coal was modeled separately. In the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e. though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (single predictions).
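
    The report's genetic-algorithm splitting is not reproduced here, but the Python sketch below illustrates the underlying idea of a representative three-way split (training, calibration, prediction) by scoring candidate splits on how closely the subset means match the full-sample mean; the random search, gamma-distributed ash values, and scoring rule are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def split_badness(values, idx_parts):
            """How far the subset means drift from the full-sample mean (lower = more
            representative). A crude stand-in for the genetic-algorithm criterion."""
            full_mean = values.mean()
            return sum(abs(values[idx].mean() - full_mean) for idx in idx_parts)

        ash = rng.gamma(shape=5.0, scale=2.0, size=104)   # hypothetical ash contents
        n = len(ash)

        best = None
        for _ in range(2000):                 # random search instead of a GA
            perm = rng.permutation(n)
            parts = (perm[: n // 2], perm[n // 2: 3 * n // 4], perm[3 * n // 4:])
            score = split_badness(ash, parts)
            if best is None or score < best[0]:
                best = (score, parts)

        train_idx, calib_idx, pred_idx = best[1]
        print("subset means:", ash[train_idx].mean(), ash[calib_idx].mean(), ash[pred_idx].mean())
        print("full mean   :", ash.mean())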

  8. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While there are many research problems that need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposes to use a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system and bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used the trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, with prediction errors of less than 2.0 meters in both temporal and spatial dimensions.
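
    The trained network itself is not described in enough detail to reproduce, but the Python sketch below shows the interface such a one-step trajectory model would have when rolled forward into a 4D track; the toy linear model and wind values are placeholders for the trained neural network.

        import numpy as np

        def rollout(model, state0, winds):
            """Roll a one-step trajectory model forward to produce a 4D track.
            `model(state, wind)` returns the next state [x, y, z, t]; a trained neural
            network would be plugged in here, a linear toy model is used below."""
            track = [np.asarray(state0, float)]
            for w in winds:
                track.append(model(track[-1], w))
            return np.vstack(track)

        def toy_model(state, wind):
            x, y, z, t = state
            vx, vy = 4.0 + wind[0], 1.0 + wind[1]   # commanded speed plus wind drift
            return np.array([x + vx, y + vy, z, t + 1.0])

        winds = [np.array([0.5, -0.2])] * 10
        print(rollout(toy_model, [0.0, 0.0, 100.0, 0.0], winds)[-1])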

  9. A neural network model for texture discrimination.

    Science.gov (United States)

    Xing, J; Gerstein, G L

    1993-01-01

    A model of texture discrimination in visual cortex was built using a feedforward network with lateral interactions among relatively realistic spiking neural elements. The elements have various membrane currents, equilibrium potentials and time constants, with action potentials and synapses. The model is derived from the modified programs of MacGregor (1987). Gabor-like filters are applied to overlapping regions in the original image; the neural network with lateral excitatory and inhibitory interactions then compares and adjusts the Gabor amplitudes in order to produce the actual texture discrimination. Finally, a combination layer selects and groups various representations in the output of the network to form the final transformed image material. We show that both texture segmentation and detection of texture boundaries can be represented in the firing activity of such a network for a wide variety of synthetic to natural images. Performance details depend most strongly on the global balance of strengths of the excitatory and inhibitory lateral interconnections. The spatial distribution of lateral connective strengths has relatively little effect. Detailed temporal firing activities of single elements in the lateral connected network were examined under various stimulus conditions. Results show (as in area 17 of cortex) that a single element's response to image features local to its receptive field can be altered by changes in the global context.

  10. Categorization in neural networks and prosopagnosia

    Science.gov (United States)

    Virasoro, M. A.

    1989-12-01

    Prosopagnosia is a syndrome characterized by a generalized difficulty in visually recognizing individual patterns among those that are similar, and can therefore be said to belong to the same category. I suggest that the existence of this dysfunction may be an important clue for understanding the categorization process in the brain. In this direction the performance of neural networks under random destruction of synapses is analysed. It is found that in almost every network that stores correlated patterns, the coding of the discriminating details between individuals inside a class is more sensitive to noise or to random destruction than the coding that distinguishes between classes. It follows that a process of death and/or deterioration at an intermediate level of intensity, even if it acts randomly on the network, may lead to a malfunctioning of the network that resembles prosopagnosia.

  11. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Purpose: To develop an effective analytical method to distinguish old peels of Xinhui Pericarpium citri reticulatae (XPCR) stored for > 3 years from new peels stored for < 3 years. Methods: Artificial neural networks (ANN) models, including general regression neural network (GRNN) and multi-layer feedforward neural ...

  12. Dynamic neural network-based methods for compensation of nonlinear effects in multimode communication lines

    Science.gov (United States)

    Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.

    2017-12-01

    We consider neural network-based schemes of digital signal processing. It is shown that the use of a dynamic neural network-based scheme of signal processing ensures an increase in the optical signal transmission quality in comparison with that provided by other methods for nonlinear distortion compensation.

  13. The role of symmetry in neural networks and their Laplacian spectra

    NARCIS (Netherlands)

    de Lange, Siemon C.|info:eu-repo/dai/nl/41392002X; van den Heuvel, Martijn P.|info:eu-repo/dai/nl/304820466; de Reus, Marcel A.|info:eu-repo/dai/nl/413970728

    2016-01-01

    Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce
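
    For orientation, the Laplacian spectrum referred to above can be computed from a connectivity matrix as in the short Python sketch below (a generic illustration; the four-node ring network is a toy example, not data from the study).

        import numpy as np

        def laplacian_spectrum(adj):
            """Eigenvalues of the combinatorial graph Laplacian L = D - A of an
            undirected, unweighted connectivity matrix."""
            adj = np.asarray(adj, dtype=float)
            degree = np.diag(adj.sum(axis=1))
            laplacian = degree - adj
            return np.sort(np.linalg.eigvalsh(laplacian))

        # Toy example: a 4-node ring network.
        ring = np.array([[0, 1, 0, 1],
                         [1, 0, 1, 0],
                         [0, 1, 0, 1],
                         [1, 0, 1, 0]])
        print(laplacian_spectrum(ring))   # [0, 2, 2, 4]; the number of zero eigenvalues
                                          # equals the number of connected components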

  14. Prediction of Modal Shift Using Artificial Neural Networks

    OpenAIRE

    Kadir Akgöl; Metin Mutlu Aydin; Özcan Asilkan; Banihan Günay

    2014-01-01

    Various public transport concepts have been developed to provide solutions to the ever-growing problem of traffic in modern times. The intelligent subscription bus service is one of them. This concept aims to provide a means of transport with near private-car comfort at near public-transport cost. In this way it encourages a shift from other modes of transport, especially the private car, to public transport. An artificial neural network model...

  15. Electroencephalography epilepsy classifications using hybrid cuckoo search and neural network

    Science.gov (United States)

    Pratiwi, A. B.; Damayanti, A.; Miswanto

    2017-07-01

    Epilepsy is a condition that affects the brain and causes repeated seizures. These seizures are episodes that can vary from brief, nearly undetectable symptoms to long periods of vigorous shaking or brain contractions. Epilepsy can often be confirmed with electroencephalography (EEG). Neural networks have been used in biomedical signal analysis and have successfully classified biomedical signals such as the EEG signal. In this paper, a hybrid of cuckoo search and a neural network is used to recognize EEG signals for epilepsy classification. The weights of the multilayer perceptron are optimized by the cuckoo search algorithm based on its error. The aim of this method is to make the network reach the local or global optimum faster so that the classification process becomes more accurate. Based on a comparison with the traditional multilayer perceptron, the hybrid of cuckoo search and multilayer perceptron provides better performance in terms of error convergence and accuracy. The proposed method gives an MSE of 0.001 and an accuracy of 90.0%.
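
    The paper's exact algorithm is not given in the record; the Python sketch below shows a simplified cuckoo-search-style random search (Gaussian steps instead of Levy flights) optimizing the weights of a small multilayer perceptron by MSE, with an invented toy classification task standing in for the EEG features.

        import numpy as np

        rng = np.random.default_rng(2)

        # Tiny nonlinear classification task standing in for the EEG feature vectors.
        X = rng.normal(size=(120, 4))
        y = (X[:, 0] * X[:, 1] > 0).astype(float)

        def unpack(w):
            W1 = w[:4 * 6].reshape(4, 6); b1 = w[24:30]
            W2 = w[30:36].reshape(6, 1);  b2 = w[36:37]
            return W1, b1, W2, b2

        def mse(w):
            W1, b1, W2, b2 = unpack(w)
            h = np.tanh(X @ W1 + b1)
            p = (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()
            return ((p - y) ** 2).mean()

        dim, n_nests, pa = 37, 15, 0.25
        nests = rng.normal(0, 0.5, (n_nests, dim))
        fitness = np.array([mse(w) for w in nests])

        for _ in range(300):
            i = rng.integers(n_nests)
            # New candidate by a random walk around an existing nest
            # (Gaussian steps used here in place of the Levy flights of full cuckoo search).
            cand = nests[i] + 0.1 * rng.normal(size=dim)
            f = mse(cand)
            j = rng.integers(n_nests)
            if f < fitness[j]:
                nests[j], fitness[j] = cand, f
            # Abandon a fraction pa of the worst nests and rebuild them randomly.
            worst = np.argsort(fitness)[-int(pa * n_nests):]
            nests[worst] = rng.normal(0, 0.5, (len(worst), dim))
            fitness[worst] = [mse(w) for w in nests[worst]]

        print("best MSE found:", fitness.min())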

  16. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Large numbers of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insights into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses under numerous conditions. Once fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  17. Adaptive model predictive process control using neural networks

    Science.gov (United States)

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  18. A comparative performance evaluation of neural network based approach for sentiment classification of online reviews

    Directory of Open Access Journals (Sweden)

    G. Vinodhini

    2016-01-01

    Full Text Available The aim of sentiment classification is to efficiently identify the emotions expressed in the form of text messages. Machine learning methods for sentiment classification have been extensively studied, due to their predominant classification performance. Recent studies suggest that ensemble-based machine learning methods provide better performance in classification. Artificial neural networks (ANNs) are rarely investigated in the sentiment classification literature. This paper compares neural network based sentiment classification methods (back propagation neural network (BPN), probabilistic neural network (PNN) and a homogeneous ensemble of PNN (HEN)) using varying levels of word granularity as features for feature-level sentiment classification. They are validated using a dataset of product reviews collected from the Amazon reviews website. An empirical analysis is done to compare the results of the ANN-based methods with two statistical individual methods. The methods are evaluated using five different quality measures, and the results show that the homogeneous ensemble of the neural network method provides better performance. Among the two neural network approaches used, probabilistic neural networks (PNNs) outperform in classifying the sentiment of the product reviews. The integration of neural network based sentiment classification methods with principal component analysis (PCA) as a feature reduction technique also provides superior performance in terms of training time.

  19. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way neurons in the human brain process data. They have been widely used in prediction, clustering, classification, and association. The training algorithms used to determine the network weights are arguably the most important factor influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights in order to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights and to compare their performance with that of other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we apply them to the classification of four opposite binary XOR clusters and to the classification of continuous real data sets such as Iris and Ecoli.

  20. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high-speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  1. Network traffic anomaly prediction using Artificial Neural Network

    Science.gov (United States)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase in internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identities, causing computer damage, or denying service to other users [1]. Malware which attacks computers or servers often triggers network traffic anomaly phenomena. Based on Sophos's report [2], Indonesia is the riskiest country for malware attacks and it also has high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most frequent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014 [4]. The data series is preprocessed first, then the network traffic anomaly is predicted using an Artificial Neural Network with two weight update algorithms: Gradient Descent and Momentum. The error of prediction is calculated using the Mean Squared Error (MSE) [7]. The experimental result shows that the MSE for SQL Injection is 0.03856. Thus, this approach can be used to predict network traffic anomalies.
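
    The two weight-update rules named above have standard forms, sketched in Python below together with the MSE measure; the toy one-weight fitting task, learning rate, and momentum coefficient are illustrative assumptions rather than the paper's settings.

        import numpy as np

        def sgd_step(w, grad, lr=0.02):
            """Plain gradient-descent weight update."""
            return w - lr * grad

        def momentum_step(w, grad, velocity, lr=0.02, beta=0.9):
            """Momentum update: the velocity term accumulates past gradients."""
            velocity = beta * velocity - lr * grad
            return w + velocity, velocity

        def mse(pred, target):
            return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

        # Toy objective: fit a single weight w so that w*x approximates y.
        x = np.array([1.0, 2.0, 3.0]); y = 2.5 * x
        w_gd, w_mo, v = 0.0, 0.0, 0.0
        for _ in range(50):
            grad_gd = np.mean(2 * (w_gd * x - y) * x)
            grad_mo = np.mean(2 * (w_mo * x - y) * x)
            w_gd = sgd_step(w_gd, grad_gd)
            w_mo, v = momentum_step(w_mo, grad_mo, v)
        print(mse(w_gd * x, y), mse(w_mo * x, y))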

  2. Adaptive nonlinear control of missiles using neural networks

    Science.gov (United States)

    McFarland, Michael Bryan

    Research has shown that neural networks can be used to improve upon approximate dynamic inversion for control of uncertain nonlinear systems. In one architecture, the neural network adaptively cancels inversion errors through on-line learning. Such learning is accomplished by a simple weight update rule derived from Lyapunov theory, thus assuring stability of the closed-loop system. In this research, previous results using linear-in-parameters neural networks were reformulated in the context of a more general class of composite nonlinear systems, and the control scheme was shown to possess important similarities and major differences with established methods of adaptive control. The neural-adaptive nonlinear control methodology in question has been used to design an autopilot for an anti-air missile with enhanced agile maneuvering capability, and simulation results indicate that this approach is a feasible one. There are, however, certain difficulties associated with choosing the proper network architecture which make it difficult to achieve the rapid learning required in this application. Accordingly, this technique has been further extended to incorporate the important class of feedforward neural networks with a single hidden layer. These neural networks feature well-known approximation capabilities and provide an effective, although nonlinear, parameterization of the adaptive control problem. Numerical results from a six-degree-of-freedom nonlinear agile anti-air missile simulation demonstrate the effectiveness of the autopilot design based on multilayer networks. Previous work in this area has implicitly assumed precise knowledge of the plant order, and made no allowances for unmodeled dynamics. This thesis describes an approach to the problem of controlling a class of nonlinear systems in the face of both unknown nonlinearities and unmodeled dynamics. The proposed methodology is similar to robust adaptive control techniques derived for control of linear

  3. A Neural Networks-Based Hybrid Routing Protocol for Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    Nenad Kojić

    2012-06-01

    Full Text Available The networking infrastructure of wireless mesh networks (WMNs) is decentralized and relatively simple, but they can display reliable functioning performance while having good redundancy. WMNs provide Internet access for fixed and mobile wireless devices. Both in urban and rural areas they provide users with high-bandwidth networks over a specific coverage area. The main problems affecting these networks are changes in network topology and link quality. In order to provide regular functioning, the routing protocol has the main influence in WMN implementations. In this paper we suggest a new routing protocol for WMN, based on good results of a proactive and reactive routing protocol, and for that reason it can be classified as a hybrid routing protocol. The proposed solution should avoid flooding while creating a new routing metric. We suggest the use of artificial logic—i.e., neural networks (NNs). This protocol is based on mobile agent technologies controlled by a Hopfield neural network. In addition to this, our new routing metric is based on multicriteria optimization in order to minimize delay and blocking probability (rejected packets or their retransmission). The routing protocol observes real network parameters and real network environments. As a result of artificial logic intelligence, the proposed routing protocol should maximize usage of network resources and optimize network performance.
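
    The Hopfield-network route selection itself is not reproduced here, but the multicriteria idea of scoring a route by its delay and blocking probability can be sketched in Python as below; the weighting scheme, normalization, and example routes are assumptions rather than the paper's actual metric.

        def route_cost(delays_ms, blocking_probs, w_delay=0.5, w_block=0.5):
            """Weighted multicriteria cost of one route: normalized total delay plus
            the probability that at least one hop blocks/drops the packet."""
            total_delay = sum(delays_ms)
            p_ok = 1.0
            for p in blocking_probs:
                p_ok *= (1.0 - p)
            return w_delay * total_delay / 100.0 + w_block * (1.0 - p_ok)

        routes = {
            "A-B-D":   ([12.0, 20.0], [0.01, 0.05]),
            "A-C-E-D": ([8.0, 9.0, 7.0], [0.02, 0.02, 0.02]),
        }
        best = min(routes, key=lambda r: route_cost(*routes[r]))
        print(best, {r: round(route_cost(*routes[r]), 4) for r in routes})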

  4. A neural networks-based hybrid routing protocol for wireless mesh networks.

    Science.gov (United States)

    Kojić, Nenad; Reljin, Irini; Reljin, Branimir

    2012-01-01

    The networking infrastructure of wireless mesh networks (WMNs) is decentralized and relatively simple, but they can display reliable functioning performance while having good redundancy. WMNs provide Internet access for fixed and mobile wireless devices. Both in urban and rural areas they provide users with high-bandwidth networks over a specific coverage area. The main problems affecting these networks are changes in network topology and link quality. In order to provide regular functioning, the routing protocol has the main influence in WMN implementations. In this paper we suggest a new routing protocol for WMN, based on good results of a proactive and reactive routing protocol, and for that reason it can be classified as a hybrid routing protocol. The proposed solution should avoid flooding while creating a new routing metric. We suggest the use of artificial logic-i.e., neural networks (NNs). This protocol is based on mobile agent technologies controlled by a Hopfield neural network. In addition to this, our new routing metric is based on multicriteria optimization in order to minimize delay and blocking probability (rejected packets or their retransmission). The routing protocol observes real network parameters and real network environments. As a result of artificial logic intelligence, the proposed routing protocol should maximize usage of network resources and optimize network performance.

  5. Marginalization in Random Nonlinear Neural Networks

    Science.gov (United States)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.
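
    A minimal Python sketch of such a random nonlinear network is given below: fixed, normally distributed input weights, a sigmoidal hidden layer, and trained readout weights (fit here by least squares, which stands in for whatever training the authors used); the toy marginalization target is an assumption.

        import numpy as np

        rng = np.random.default_rng(3)

        # Inputs: two variables; target: a "marginal-like" function of only the first,
        # so the network must learn to ignore (average over) the second variable.
        X = rng.uniform(-2, 2, size=(500, 2))
        target = np.exp(-X[:, 0] ** 2)

        n_hidden = 200
        W_in = rng.normal(0, 1.0, (2, n_hidden))      # fixed, normally distributed
        b_in = rng.normal(0, 1.0, n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W_in + b_in)))  # sigmoidal hidden layer

        # Only the readout weights are trained (here by least squares).
        w_out, *_ = np.linalg.lstsq(H, target, rcond=None)

        pred = H @ w_out
        print("training RMSE:", float(np.sqrt(np.mean((pred - target) ** 2))))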

  6. Neural Network Model of memory retrieval

    Directory of Open Access Journals (Sweden)

    Stefano eRecanatesi

    2015-12-01

    Full Text Available Human memory can store a large amount of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predict the distribution of time intervals required to recall new memory items observed in experiments. The model shows that items having a larger number of neurons in their representation are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).

  7. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the input data form. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, achieving invariances, and other problems.

  8. Neural networks for beat perception in musical rhythm

    Directory of Open Access Journals (Sweden)

    Edward W Large

    2015-11-01

    Full Text Available Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly forty years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. We test the model's prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm.

  9. Neural Network Control of Asymmetrical Multilevel Converters

    Directory of Open Access Journals (Sweden)

    Patrice WIRA

    2009-12-01

    Full Text Available This paper proposes a neural implementation of a harmonic elimination strategy (HES) to control a Uniform Step Asymmetrical Multilevel Inverter (USAMI). The mapping between the modulation rate and the required switching angles is learned and approximated with a Multi-Layer Perceptron (MLP) neural network. After learning, appropriate switching angles can be determined with the neural network, leading to a low-computational-cost neural controller which is well suited for real-time applications. This technique can be applied to multilevel inverters with any number of levels. As an example, a nine-level inverter and an eleven-level inverter are considered and the optimum switching angles are calculated on-line. Comparisons to the well-known sinusoidal pulse-width modulation (SPWM) have been carried out in order to evaluate the performance of the proposed approach. Simulation results demonstrate the technical advantages of the proposed neural implementation over the conventional method (SPWM) in eliminating harmonics while controlling a nine-level and eleven-level USAMI. This neural approach is applied for the supply of an asynchronous machine and results show that it ensures the highest quality torque by efficiently canceling the harmonics generated by the inverters.

  10. Multi-Layer and Recursive Neural Networks for Metagenomic Classification.

    Science.gov (United States)

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-09-01

    Recent advances in machine learning, specifically in deep learning with neural networks, have made a profound impact on fields such as natural language processing, image classification, and language modeling; however, the feasibility and potential benefits of such approaches for metagenomic data analysis have been largely under-explored. Deep learning exploits many layers of learning nonlinear feature representations, typically in an unsupervised fashion, and recent results have shown outstanding generalization performance on previously unseen data. Furthermore, some deep learning methods can also represent the structure in a data set. Consequently, deep learning and neural networks may prove to be an appropriate approach for metagenomic data. To determine whether such approaches are indeed appropriate for metagenomics, we experiment with two deep learning methods: i) a deep belief network, and ii) a recursive neural network, the latter of which provides a tree representing the structure of the data. We compare these approaches to the standard multi-layer perceptron, which has been well-established in the machine learning community as a powerful prediction algorithm, though its presence is largely missing in the metagenomics literature. We find that traditional neural networks can be quite powerful classifiers on metagenomic data compared to baseline methods, such as random forests. On the other hand, while the deep learning approaches did not result in improvements to the classification accuracy, they do provide the ability to learn hierarchical representations of a data set that standard classification methods do not allow. Our goal in this effort is not to determine the best algorithm in terms of accuracy, as that depends on the specific application, but rather to highlight the benefits and drawbacks of each of the approaches we discuss and to provide insight on how they can be improved for predictive metagenomic analysis.

  11. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
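
    As a generic illustration of the kind of ring-network-with-adaptation dynamics discussed above, the Python sketch below simulates rate units on a ring with cosine coupling and a slow adaptation variable; the equations, parameters, and tuned input are a standard textbook-style formulation, not the author's exact model.

        import numpy as np

        # Rate-model sketch with spike-frequency adaptation:
        #   tau_u du_i/dt = -u_i - a_i + sum_j J_ij f(u_j) + I_i
        #   tau_a da_i/dt = -a_i + g * f(u_i)
        N = 60
        theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
        J = (2.0 * np.cos(theta[:, None] - theta[None, :]) - 0.5) / N   # ring coupling

        def f(u):
            """Rectified-linear rate function."""
            return np.maximum(u, 0.0)

        def simulate(stim_angle, g=0.5, T=2000, dt=0.01, tau_u=1.0, tau_a=5.0):
            u = np.zeros(N); a = np.zeros(N)
            I = 1.0 + 0.8 * np.cos(theta - stim_angle)    # tuned external input
            for _ in range(T):
                r = f(u)
                u += dt / tau_u * (-u - a + J @ r + I)
                a += dt / tau_a * (-a + g * r)            # adaptation tracks the rate
            return f(u)

        rates = simulate(stim_angle=np.pi / 3)
        print("bump peaks at angle:", float(theta[np.argmax(rates)]))   # near pi/3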

  12. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of streamcourse, an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computational requirements, seriously restricting the application – especially when dealing with real-time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN). First studies show the ability of adequately trained multilayer feedforward networks (MLFN) to reproduce the model performance.

  13. Quantum generalisation of feedforward neural networks

    Science.gov (United States)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  14. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  15. An attractor-based complexity measurement for Boolean recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E P

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.

  16. Regularized negative correlation learning for neural network ensembles.

    Science.gov (United States)

    Chen, Huanhuan; Yao, Xin

    2009-12-01

    Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term to the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when lambda = 1) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains the reason why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter lambda in NCL by cross validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, and each sub-objective is implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and provide an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. The experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level is nontrivial in the data set.
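
    For context, NCL is commonly written as follows (a sketch of the standard formulation, not quoted from this paper): each member $f_i$ of an $M$-member ensemble with output $\bar f(x)=\frac{1}{M}\sum_{i=1}^{M} f_i(x)$ is trained on

    $$e_i = \sum_{n}\big(f_i(x_n)-y_n\big)^2 + \lambda \sum_{n}\big(f_i(x_n)-\bar f(x_n)\big)\sum_{j\neq i}\big(f_j(x_n)-\bar f(x_n)\big),$$

    and the analysis above concerns the behaviour of this objective as $\lambda \to 1$. RNCL, as described, attaches an additional regularization term to each sub-objective (for instance a weight-decay-style penalty $\alpha_i\lVert\mathbf{w}_i\rVert^2$), with the $\alpha_i$ set by Bayesian inference rather than cross validation.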

  17. The Usage of Neural Networks for the Medical Diagnosis

    OpenAIRE

    Malyshevska, Kateryna

    2009-01-01

    The problem of cancer diagnosis from multi-channel images using neural networks is investigated. The goal of this work is to classify the different tissue types that are used to determine the cancer risk. Radial basis function networks and backpropagation neural networks are used for classification. The results of the experiments are presented.

  18. Daily Nigerian peak load forecasting using artificial neural network ...

    African Journals Online (AJOL)

    A daily peak load forecasting technique that uses artificial neural network with seasonal indices is presented in this paper. A neural network of relatively smaller size than the main prediction network is used to predict the daily peak load for a period of one year over which the actual daily load data are available using one ...

  19. Prediction of Parametric Roll Resonance by Multilayer Perceptron Neural Network

    DEFF Research Database (Denmark)

    Míguez González, M; López Peña, F.; Díaz Casás, V.

    2011-01-01

    acknowledged in the last few years. This work proposes a prediction system based on a multilayer perceptron (MP) neural network. The training and testing of the MP network is accomplished by feeding it with simulated data of a three degrees-of-freedom nonlinear model of a fishing vessel. The neural network...

  20. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  1. Particle swarm optimization of a neural network model in a ...

    Indian Academy of Sciences (India)

    This paper presents a particle swarm optimization (PSO) technique to train an artificial neural network (ANN) for prediction of flank wear in drilling, and compares the network performance with that of the back propagation neural network (BPNN). This analysis is carried out following a series of experiments employing high ...

  2. Mechatronic Hydraulic Drive with Regulator, Based on Artificial Neural Network

    Science.gov (United States)

    Burennikov, Y.; Kozlov, L.; Pyliavets, V.; Piontkevych, O.

    2017-06-01

    Mechatronic hydraulic drives based on a variable pump, proportional hydraulics and controllers find wide application in technological machines and testing equipment. Mechatronic hydraulic drives provide the required motion parameters of the actuating elements, with the possibility of correcting them when the external loads change. This makes it possible to improve the quality of working operations and to increase the capacity of machines. A scheme of a mechatronic hydraulic drive based on a pump, a hydraulic cylinder, a proportional valve with electrohydraulic control and a programmable controller is suggested. An algorithm for the control of the mechatronic hydraulic drive to provide the required pressure change law in the hydraulic cylinder is developed. Artificial neural networks are used to implement the control algorithm in the controller. A mathematical model of the mechatronic hydraulic drive is developed, enabling the creation of the training base for adjusting the artificial neural networks of the regulator.

  3. Survey on Neural Networks Used for Medical Image Processing.

    Science.gov (United States)

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-02-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application for medical image processing and an outlook for future research are also discussed. By this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks for solving medical image processing tasks? We believe that this would be very helpful to researchers who are involved in medical image processing with neural network techniques.

  4. Permeability prediction in shale gas reservoirs using Neural Network

    Science.gov (United States)

    Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    Here, we suggest the use of artificial neural networks for permeability prediction in shale gas reservoirs. The prediction of permeability in shale gas reservoirs is a complicated task that requires new models, since Darcy's fluid flow model is not suitable. The proposed idea is based on training a neural network machine using a set of well-log data as input and the measured permeability as output. In this case a multilayer perceptron neural network is used with the Levenberg-Marquardt algorithm. Application to two horizontal wells drilled in the Barnett shale formation exhibits the power of the neural network model to resolve such a problem. Keywords: artificial neural network, permeability, prediction, shale gas.

  5. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. The proposed model is analyzed with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and is compared with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN); the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices. PMID:27293423

  6. Neuromodulatory connectivity defines the structure of a behavioral neural network.

    Science.gov (United States)

    Diao, Feici; Elliott, Amicia D; Diao, Fengqiu; Shah, Sarav; White, Benjamin H

    2017-11-22

    Neural networks are typically defined by their synaptic connectivity, yet synaptic wiring diagrams often provide limited insight into network function. This is due partly to the importance of non-synaptic communication by neuromodulators, which can dynamically reconfigure circuit activity to alter its output. Here, we systematically map the patterns of neuromodulatory connectivity in a network that governs a developmentally critical behavioral sequence in Drosophila. This sequence, which mediates pupal ecdysis, is governed by the serial release of several key factors, which act both somatically as hormones and within the brain as neuromodulators. By identifying and characterizing the functions of the neuronal targets of these factors, we find that they define hierarchically organized layers of the network controlling the pupal ecdysis sequence: a modular input layer, an intermediate central pattern generating layer, and a motor output layer. Mapping neuromodulatory connections in this system thus defines the functional architecture of the network.

  7. Experiments on neural network architectures for fuzzy logic

    Science.gov (United States)

    Keller, James M.

    1991-01-01

    The use of fuzzy logic to model and manage uncertainty in a rule-based system places high computational demands on an inference engine. In an earlier paper, the authors introduced a trainable neural network structure for fuzzy logic. These networks can learn and extrapolate complex relationships between possibility distributions for the antecedents and consequents in the rules. Here, the power of these networks is further explored. The insensitivity of the output to noisy input distributions (which are likely if the clauses are generated from real data) is demonstrated as well as the ability of the networks to internalize multiple conjunctive clause and disjunctive clause rules. Since different rules with the same variables can be encoded in a single network, this approach to fuzzy logic inference provides a natural mechanism for rule conflict resolution.

  8. NNETS - NEURAL NETWORK ENVIRONMENT ON A TRANSPUTER SYSTEM

    Science.gov (United States)

    Villarreal, J.

    1994-01-01

    The primary purpose of NNETS (Neural Network Environment on a Transputer System) is to provide users a high degree of flexibility in creating and manipulating a wide variety of neural network topologies at processing speeds not found in conventional computing environments. To accomplish this purpose, NNETS supports back propagation and back propagation related algorithms. The back propagation algorithm used is an implementation of Rumelhart's Generalized Delta Rule. NNETS was developed on the INMOS Transputer. NNETS predefines a Back Propagation Network, a Jordan Network, and a Reinforcement Network to assist users in learning and defining their own networks. The program also allows users to configure other neural network paradigms from the NNETS basic architecture. The Jordan network is basically a feed forward network that has the outputs connected to a pseudo input layer. The state of the network is dependent on the inputs from the environment plus the state of the network. The Reinforcement network learns via a scalar feedback signal called reinforcement. The network propagates forward randomly. The environment looks at the outputs of the network to produce a reinforcement signal that is fed back to the network. NNETS was written for the INMOS C compiler D711B version 1.3 or later (MS-DOS version). A small portion of the software was written in the OCCAM language to perform the communications routing between processors. NNETS is configured to operate on a 4 X 10 array of Transputers in sequence with a Transputer based graphics processor controlled by a master IBM PC 286 (or better) Transputer. A RGB monitor is required which must be capable of 512 X 512 resolution. It must be able to receive red, green, and blue signals via BNC connectors. NNETS is meant for experienced Transputer users only. The program is distributed on 5.25 inch 1.2Mb MS-DOS format diskettes. NNETS was developed in 1991. Transputer and OCCAM are registered trademarks of Inmos Corporation. MS
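
    The Jordan recurrence described above (network outputs fed back into a pseudo input layer) can be sketched in a few lines; the layer sizes, logistic activation and random weights below are illustrative and are not NNETS defaults.

```python
# Minimal sketch of a Jordan-style step: the previous output is fed back as a
# pseudo input ("state") next to the external input. Sizes and the logistic
# activation are illustrative assumptions, not NNETS defaults.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hidden, n_out = 3, 5, 2
rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.5, size=(n_hidden, n_in + n_out))  # [input, state] -> hidden
W_o = rng.normal(scale=0.5, size=(n_out, n_hidden))         # hidden -> output

state = np.zeros(n_out)                                     # pseudo input layer
for t, x in enumerate(rng.normal(size=(4, n_in))):          # a short input sequence
    h = sigmoid(W_h @ np.concatenate([x, state]))
    y = sigmoid(W_o @ h)
    state = y                     # the network state depends on its own output
    print(f"step {t}: output {y.round(3)}")
```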

  9. A note on the complexity of reliability in neural networks.

    Science.gov (United States)

    Berman, P; Parberry, I; Schnitger, G

    1992-01-01

    It is shown that in a standard discrete neural network model with small fan-in, tolerance to random malicious faults can be achieved with a log-linear increase in the number of neurons and a constant factor increase in parallel time, provided fan-in can increase arbitrarily. A similar result is obtained for a nonstandard but closely related model with no restriction on fan-in.

  10. Feedforward Backpropagation Neural Networks in Prediction of Farmer Risk Preferences

    OpenAIRE

    Kastens, Terry L.; Featherstone, Allen M.

    1996-01-01

    An out-of-sample prediction of Kansas farmers' responses to five surveyed questions involving risk is used to compare ordered multinomial logistic regression models with feedforward backpropagation neural network models. Although the logistic models often predict more accurately than the neural network models in a mean-squared error sense, the neural network models are shown to be more accommodating of loss functions associated with a desire to predict certain combinations of categorical resp...

  11. Classification of behavior using unsupervised temporal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Adair, K.L. [Florida State Univ., Tallahassee, FL (United States). Dept. of Computer Science; Argo, P. [Los Alamos National Lab., NM (United States)

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential pattern sequences as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem.

  12. Pixel-wise Segmentation of Street with Neural Networks

    OpenAIRE

    Bittel, Sebastian; Kaiser, Vitali; Teichmann, Marvin; Thoma, Martin

    2015-01-01

    Pixel-wise street segmentation of photographs taken from a driver's perspective is important for self-driving cars and can also support other object recognition tasks. A framework called SST was developed to examine the accuracy and execution time of different neural networks. The best result, an $F_1$-score of 89.5%, was achieved with a simple feedforward neural network trained to solve a regression task.

  13. Survey on Neural Networks Used for Medical Image Processing

    OpenAIRE

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-01-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application for medical image processing and an outlook for future research are also discussed. By this survey, we try to answer the following two important questions: (1) Wh...

  14. Neural networks analysis on SSME vibration simulation data

    Science.gov (United States)

    Lo, Ching F.; Wu, Kewei

    1993-01-01

    The neural networks method is applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME, to supplement the statistical method utilized in the prototype system. The investigation of the neural networks analysis is conducted using SSME vibration data from a NASA-developed numerical simulator. The limited application of neural networks to the HPFTP has also shown their effectiveness in diagnosing anomalies in turbopump vibrations.

  15. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in the machine learning area is classification, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in this paper. After summarizing the theoretical construction of the model, we carry out a simple and practical weather forecasting experiment, which shows that the recognizer accuracy reaches 100%, a promising result.

  16. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  17. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  18. Training product unit neural networks with genetic algorithms

    Science.gov (United States)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined; product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.

  19. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    in unison to solve a specific problem. The network learns through examples, so it requires good examples to train properly and further a trained network model can be used for prediction purpose. In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer neural networks are often used...

  20. Parameterizing Stellar Spectra Using Deep Neural Networks

    Science.gov (United States)

    Li, Xiang-Ru; Pan, Ru-Yang; Duan, Fu-Qing

    2017-03-01

    Large-scale sky surveys are observing massive amounts of stellar spectra. The large number of stellar spectra makes it necessary to automatically parameterize spectral data, which in turn helps in statistically exploring properties related to the atmospheric parameters. This work focuses on designing an automatic scheme to estimate effective temperature ($T_{\rm eff}$), surface gravity ($\log g$) and metallicity [Fe/H] from stellar spectra. A scheme based on three deep neural networks (DNNs) is proposed. This scheme consists of the following three procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, the three atmospheric parameters $T_{\rm eff}$, $\log g$ and [Fe/H] are estimated using the computed DNNs. The constructed DNN is a neural network with six layers (one input layer, one output layer and four hidden layers), for which the numbers of nodes in the six layers are 3821, 1000, 500, 100, 30 and 1, respectively. This proposed scheme was tested on both real spectra and theoretical spectra from Kurucz's new opacity distribution function models. Test errors are measured with mean absolute errors (MAEs). The errors on real spectra from the Sloan Digital Sky Survey (SDSS) are 0.1477, 0.0048 and 0.1129 dex for $\log g$, $\log T_{\rm eff}$ and [Fe/H] (64.85 K for $T_{\rm eff}$), respectively. For theoretical spectra from Kurucz's new opacity distribution function models, the MAEs of the test errors are 0.0182, 0.0011 and 0.0112 dex for $\log g$, $\log T_{\rm eff}$ and [Fe/H] (14.90 K for $T_{\rm eff}$), respectively.
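
    The layer sizes quoted above (3821-1000-500-100-30-1) can be sketched directly. The snippet below uses scikit-learn's MLPRegressor on synthetic spectra and a single synthetic target, and omits the autoencoder pre-training step, so it only illustrates the network shape and the regression setup, not the authors' full scheme.

```python
# Sketch of a 3821-1000-500-100-30-1 regression network on synthetic "spectra".
# The layer-wise autoencoder pre-training described in the record is omitted;
# the data and the single target below are stand-ins, not SDSS or Kurucz spectra.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_spectra, n_pixels = 200, 3821                  # 3821 input fluxes per spectrum
X = rng.normal(size=(n_spectra, n_pixels))
y = X[:, :10].mean(axis=1)                       # stand-in for one parameter, e.g. [Fe/H]

model = MLPRegressor(hidden_layer_sizes=(1000, 500, 100, 30),
                     activation="relu", solver="adam",
                     max_iter=30, random_state=0)
model.fit(X, y)
pred = model.predict(X[:5])
print("MAE on a few training spectra:", np.mean(np.abs(pred - y[:5])))
```

    In the scheme described above, one such network would be trained per atmospheric parameter ($T_{\rm eff}$, $\log g$, [Fe/H]).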

  1. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation can be categorized into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful Deep Recurrent Neural Network, termed the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach compared to a classical feedforward neural network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods and is able to learn the temporal evolution of precipitation events through over 1000 time lags. The uniqueness of PERSIANN's algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.

  2. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred the development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented, and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced together with support vector machines, and the limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  3. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  4. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    A multichannel optical correlator implements a shift-invariant, high-discrimination pattern-recognizing neural network based on the paradigm of the neocognitron. The optical correlator was selected as the basic building block of this neural network because invariance under shifts is an inherent advantage of the Fourier optics included in optical correlators in general. The neocognitron is a conceptual electronic neural-network model for the recognition of visual patterns. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. The neural network is trained by use of characteristic features extracted from target images. The multichannel implementation enables parallel processing of a large number of selected features.

  5. Material procedure quality forecast based on genetic BP neural network

    Science.gov (United States)

    Zheng, Bao-Hua

    2017-07-01

    Material procedure quality forecasting plays an important role in quality control. This paper proposes a prediction model based on a genetic algorithm (GA) and a back propagation (BP) neural network. The GA's global search ability is used to obtain optimized initial weights and thresholds for the BP neural network. A material process quality prediction model with the optimized BP neural network is adopted to predict the error of a future process, as a measure of the accuracy of process quality. The results show that the proposed method has the advantages of high accuracy and a fast convergence rate compared with a plain BP neural network.
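
    A minimal sketch of the GA-plus-BP idea described above: a genetic algorithm performs the global search for the initial weights of a small one-hidden-layer network, and plain gradient descent (backpropagation) then fine-tunes from that starting point. The network size, GA settings and synthetic process data are all illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of GA-initialised backpropagation for a tiny one-hidden-layer network.
# Network size, GA settings and the synthetic "process" data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))              # stand-in process inputs
t = np.sin(X[:, :1]) + 0.5 * X[:, 1:]              # stand-in quality target

n_in, n_hid, n_out = 2, 6, 1
n_w = n_hid * (n_in + 1) + n_out * (n_hid + 1)     # total number of weights

def unpack(w):
    i = n_hid * n_in
    W1, b1 = w[:i].reshape(n_hid, n_in), w[i:i + n_hid]
    j = i + n_hid
    W2, b2 = w[j:j + n_out * n_hid].reshape(n_out, n_hid), w[-n_out:]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1.T + b1)
    return H @ W2.T + b2, H

def mse(w):
    y, _ = forward(w, X)
    return np.mean((y - t) ** 2)

# Genetic algorithm: global search for a good set of initial weights.
pop = rng.normal(size=(40, n_w))
for gen in range(30):
    fitness = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:20]]                        # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, size=2)]
        mask = rng.random(n_w) < 0.5                               # uniform crossover
        kids.append(np.where(mask, a, b) + rng.normal(scale=0.1, size=n_w))  # + mutation
    pop = np.vstack([parents, kids])
w = pop[np.argmin([mse(c) for c in pop])].copy()
print("MSE after GA initialisation:", mse(w))

# Backpropagation: local gradient-descent fine-tuning from the GA solution.
lr = 0.05
for step in range(500):
    W1, b1, W2, b2 = unpack(w)
    y, H = forward(w, X)
    e = 2 * (y - t) / len(X)                      # dLoss/dOutput
    gW2, gb2 = e.T @ H, e.sum(axis=0)
    dH = (e @ W2) * (1 - H ** 2)                  # backprop through tanh
    gW1, gb1 = dH.T @ X, dH.sum(axis=0)
    w -= lr * np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
print("MSE after BP fine-tuning:", mse(w))
```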

  6. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  7. Power converters and AC electrical drives with linear neural networks

    CERN Document Server

    Cirrincione, Maurizio

    2012-01-01

    The first book of its kind, Power Converters and AC Electrical Drives with Linear Neural Networks systematically explores the application of neural networks in the field of power electronics, with particular emphasis on the sensorless control of AC drives. It presents the classical theory based on space-vectors in identification, discusses control of electrical drives and power converters, and examines improvements that can be attained when using linear neural networks. The book integrates power electronics and electrical drives with artificial neural networks (ANN). Organized into four parts,

  8. Liquefaction Microzonation of Babol City Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Choobbasti, A.J.; Barari, Amin

    2012-01-01

    that will be less susceptible to damage during earthquakes. The scope of present study is to prepare the liquefaction microzonation map for the Babol city based on Seed and Idriss (1983) method using artificial neural network. Artificial neural network (ANN) is one of the artificial intelligence (AI) approaches...... is proposed in this paper. To meet this objective, an effort is made to introduce a total of 30 boreholes data in an area of 7 km2 which includes the results of field tests into the neural network model and the prediction of artificial neural network is checked in some test boreholes, finally the liquefaction...

  9. A hardware implementation of neural network with modified HANNIBAL architecture

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bum youb; Chung, Duck Jin [Inha University, Inchon (Korea, Republic of)

    1996-03-01

    A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Backpropagation Algorithm Learning). To implement efficient neural network hardware, we analyzed various types of multipliers, which are the major function blocks of a neuro-processor cell. Based on this result, we designed efficient digital neural network hardware using a serial/parallel multiplier and tested its operation. We also analyzed the hardware efficiency with logic-level simulation. (author). 14 refs., 10 figs., 3 tabs.

  10. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  11. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    to the biological neurons, works on the input and output passing through a hidden layer. The ANN used here is a data-oriented modeling technique to find relations between input and output patterns by self learning and without any fixed mathematical form assumed... The network error is computed as $E = \frac{1}{p}\sum_{p} E_p$ (2), where $E_p = \frac{1}{2}\sum_{k} (T_k - O_k)^2$ (3); $p$ is the total number of training patterns, $T_k$ is the actual output and $O_k$ is the predicted output at the kth output node. In the learning process of the backpropagation neural network...
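
    The two error measures above transcribe directly into code; the array shapes and values below are illustrative.

```python
# Direct transcription of the reconstructed error measures: per-pattern error
# E_p = 0.5 * sum_k (T_k - O_k)^2 and the average E = (1/p) * sum_p E_p.
# The actual/predicted output arrays below are illustrative placeholders.
import numpy as np

T = np.array([[0.2, 0.8], [0.5, 0.1], [0.9, 0.4]])   # actual outputs, shape (p, k)
O = np.array([[0.3, 0.7], [0.4, 0.2], [0.8, 0.5]])   # predicted outputs, shape (p, k)

E_p = 0.5 * np.sum((T - O) ** 2, axis=1)             # error of each training pattern
E = E_p.mean()                                       # average error over p patterns
print("E_p =", E_p, " E =", E)
```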

  12. Convolution neural networks for ship type recognition

    Science.gov (United States)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  13. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behaviour of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, removing irrelevant events like birds, and determine the exact position of relevant events, allowing the system to track them over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a

  14. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

    Full Text Available In this paper, we consider a method for defect detection in a video sequence, which consists of three main steps: frame compensation, preprocessing by a detector based on the ranking of pixel values, and the classification of all pixels having anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged under natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.

  15. NNSYSID and NNCTRL Tools for system identification and control with neural networks

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    2001-01-01

    Two toolsets for use with MATLAB have been developed: the neural network based system identification toolbox (NNSYSID) and the neural network based control system design toolkit (NNCTRL). The NNSYSID toolbox has been designed to assist identification of nonlinear dynamic systems. It contains a number of nonlinear model structures based on neural networks, effective training algorithms and tools for model validation and model structure selection. The NNCTRL toolkit is an add-on to NNSYSID and provides tools for design and simulation of control systems based on neural networks. The user can choose among several designs such as direct inverse control, internal model control, nonlinear feedforward, feedback linearisation, optimal control, gain scheduling based on instantaneous linearisation of neural network models and nonlinear model predictive control. This article gives an overview...

  16. NNSYSID and NNCTRL Tools for system identification and control with neural networks

    DEFF Research Database (Denmark)

    Nørgaard, Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    2001-01-01

    Two toolsets for use with MATLAB have been developed: the neural network based system identification toolbox (NNSYSID) and the neural network based control system design toolkit (NNCTRL). The NNSYSID toolbox has been designed to assist identification of nonlinear dynamic systems. It contains a number of nonlinear model structures based on neural networks, effective training algorithms and tools for model validation and model structure selection. The NNCTRL toolkit is an add-on to NNSYSID and provides tools for design and simulation of control systems based on neural networks. The user can choose among several designs such as direct inverse control, internal model control, nonlinear feedforward, feedback linearisation, optimal control, gain scheduling based on instantaneous linearisation of neural network models and nonlinear model predictive control. This article gives an overview...

  17. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of neurophobic, polyimide-coated substrate by triblock-copolymer coating enhances neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  18. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control.

    Science.gov (United States)

    Yang, Shiju; Li, Chuandong; Huang, Tingwen

    2016-03-01

    The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristors and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using the Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper can also be applied to fuzzy models of complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whiwlow; Larsen, Jan

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  20. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tao Ma

    2016-10-01

    Full Text Available The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.

  1. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    Science.gov (United States)

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.
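
    A rough sketch of this pipeline: partition the training data into k subsets around cluster centres, train one network per subset, and route each test sample to the network of its nearest centre. Here KMeans stands in for the spectral-clustering step and scikit-learn's MLPClassifier for the DNN; the synthetic dataset and all settings are illustrative, not the KDD-Cup99/NSL-KDD experiments.

```python
# Illustrative sketch of the cluster-then-classify idea (KMeans standing in for
# spectral clustering, MLPClassifier for the DNN); data and settings are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=2, flip_y=0.05,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)
models = []
for c in range(k):                                   # one network per subset
    idx = km.labels_ == c
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    models.append(clf.fit(X_tr[idx], y_tr[idx]))

assign = km.predict(X_te)                            # route each test point to the
pred = np.array([models[assign[i]].predict(X_te[i:i + 1])[0]   # nearest centre's model
                 for i in range(len(X_te))])
print("Ensemble accuracy:", (pred == y_te).mean())
```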

  2. Characterization of Early Cortical Neural Network ...

    Science.gov (United States)

    We examined the development of neural network activity using microelectrode array (MEA) recordings made in multi-well MEA plates (mwMEAs) over the first 12 days in vitro (DIV). In primary cortical cultures made from postnatal rats, action potential spiking activity was essentially absent on DIV 2 and developed rapidly between DIV 5 and 12. Spiking activity was primarily sporadic and unorganized at early DIV, and became progressively more organized with time in culture, with bursting parameters, synchrony and network bursting increasing between DIV 5 and 12. We selected 12 features to describe network activity and principal components analysis using these features demonstrated a general segregation of data by age at both the well and plate levels. Using a combination of random forest classifiers and Support Vector Machines, we demonstrated that 4 features (CV of within burst ISI, CV of IBI, network spike rate and burst rate) were sufficient to predict the age (either DIV 5, 7, 9 or 12) of each well recording with >65% accuracy. When restricting the classification problem to a binary decision, we found that classification improved dramatically, e.g. 95% accuracy for discriminating DIV 5 vs DIV 12 wells. Further, we present a novel resampling approach to determine the number of wells that might be needed for conducting comparisons of different treatments using mwMEA plates. Overall, these results demonstrate that network development on mwMEA plates is similar to

  3. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
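
    One way to picture the ODE interpretation described here is a forward-Euler residual step whose layer matrix is constrained (below, antisymmetric) so that the forward dynamics do not expand; this particular parameterisation is an illustrative sketch, not the authors' exact architecture.

```python
# Sketch of ODE-style forward propagation: Y_{j+1} = Y_j + h * tanh(K_j Y_j + b_j),
# with an antisymmetric K_j (K = A - A^T) as one illustrative way of keeping the
# forward dynamics, and hence the gradients, from exploding or vanishing.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_layers, h = 8, 20, 0.1

Y = rng.normal(size=(n_features, 5))          # five example inputs as columns
for j in range(n_layers):
    A = rng.normal(scale=0.3, size=(n_features, n_features))
    K = A - A.T                               # antisymmetric: eigenvalues on the imaginary axis
    b = rng.normal(scale=0.1, size=(n_features, 1))
    Y = Y + h * np.tanh(K @ Y + b)            # forward-Euler step of dY/dt = tanh(K Y + b)
print("feature norms stay bounded:", np.linalg.norm(Y, axis=0).round(3))
```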

  4. Phase diagram of spiking neural networks.

    Science.gov (United States)

    Seyed-Allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of the neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective. Inspired by evolution, I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate the networks with pulses and then measure their dynamic range, the dominant frequency of population activities, the total duration of activities, the maximum population rate and the occurrence time of the maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters: the excitatory to inhibitory ratio, the sparseness of connections and the synaptic weights. This phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks which are configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independent of external stimuli.
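
    The common values above can be instantiated directly as a random connectivity matrix (2% connection probability, 80% excitatory and 20% inhibitory neurons); the synaptic weight values in the sketch below are arbitrary placeholders.

```python
# Build a random connectivity matrix with the common values discussed above:
# 2% connection probability, 80% excitatory / 20% inhibitory neurons.
# The weight magnitudes are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, p_conn = 1000, 0.02
n_exc = int(0.8 * n)                          # first 800 neurons are excitatory

conn = rng.random((n, n)) < p_conn            # conn[i, j]: synapse from j onto i
np.fill_diagonal(conn, False)                 # no self-connections
w = np.zeros((n, n))
w[:, :n_exc][conn[:, :n_exc]] = 0.1           # excitatory outgoing weights
w[:, n_exc:][conn[:, n_exc:]] = -0.4          # inhibitory outgoing weights

print("mean in-degree:", conn.sum(axis=1).mean())
print("fraction of excitatory synapses:", (w > 0).sum() / conn.sum())
```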

  5. An efficient neural network approach to dynamic robot motion planning.

    Science.gov (United States)

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.
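
    The shunting dynamics referred to above are commonly of the Grossberg form (a generic sketch, not necessarily the exact parameterisation of this paper):

    $$\frac{dx_i}{dt} = -A\,x_i + (B - x_i)\,S_i^{+} - (D + x_i)\,S_i^{-},$$

    where $x_i$ is the activity (landscape value) of neuron $i$, $A$ is a passive decay rate, $B$ and $D$ bound the activity to the interval $[-D, B]$, and $S_i^{+}$ and $S_i^{-}$ collect the excitatory inputs (e.g. the target and active neighbours) and the inhibitory inputs (e.g. obstacles) arriving through the purely local connections. Because each neuron only sums its local inputs, one update of the whole activity landscape costs time linear in the network size, which is the source of the computational efficiency claimed above.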

  6. The relevance of network micro-structure for neural dynamics

    Directory of Open Access Journals (Sweden)

    Volker ePernice

    2013-06-01

    Full Text Available The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  7. Communication: Fitting potential energy surfaces with fundamental invariant neural network

    Science.gov (United States)

    Shao, Kejie; Chen, Jun; Zhao, Zhiqiang; Zhang, Dong H.

    2016-08-01

    A more flexible neural network (NN) method using the fundamental invariants (FIs) as the input vector is proposed in the construction of potential energy surfaces for molecular systems involving identical atoms. Mathematically, FIs finitely generate the permutation invariant polynomial (PIP) ring. In combination with NN, fundamental invariant neural network (FI-NN) can approximate any function to arbitrary accuracy. Because FI-NN minimizes the size of input permutation invariant polynomials, it can efficiently reduce the evaluation time of potential energy, in particular for polyatomic systems. In this work, we provide the FIs for all possible molecular systems up to five atoms. Potential energy surfaces for OH3 and CH4 were constructed with FI-NN, with the accuracy confirmed by full-dimensional quantum dynamic scattering and bound state calculations.

  8. Programmable synaptic chip for electronic neural networks

    Science.gov (United States)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32 x 32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512 x 512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of the synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  9. Dynamics of macro- and microscopic neural networks

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare

    2014-01-01

    GN), which is a class of signals with a non-trivial low-frequency component. It is assumed that certain characteristics of the low-frequency component can yield information about the neural processes behind the signal. The method has been used in a range of different studies over the course of the past 10...... that the method continues to find use, of which examples are presented. In the second part of the thesis, numerical simulations of networks of neurons are described. To simplify the analysis, a relatively simple neuron model - Leaky Integrate and Fire - is chosen. The strengths of the connections between...... shown that the synchronizing effect of the plasticity disappears when the strengths of the connections are frozen in time. Subsequently, the so-called ``Sisyphus'' mechanism is discussed, which is shown to cause slow fluctuations in both the network synchronization and the strengths...

  10. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  11. Infant Joint Attention, Neural Networks and Social Cognition

    Science.gov (United States)

    Mundy, Peter; Jarrold, William

    2010-01-01

    Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural networks approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought processes and social cognition. At its most basic, joint attention involves the capacity to coordinate one’s own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one’s own attention and the attention of other people. Infant practice with joint attention is both a consequence and organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding beginning in the first year of life. We also propose that with development joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition. PMID:20884172

  12. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features and more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40-fold speed-up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.
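
    The "convolutional implementation of a fully connected layer" mentioned above can be sketched as follows: a patch classifier's fully connected head is copied into a k x k convolution with identical weights, so label scores for every patch centre of a slice are produced in a single pass. The layer sizes, 4-channel input and 9 x 9 receptive field are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Replace a fully connected output layer with an equivalent convolution so a
# patch-wise classifier can be slid over a whole image slice in one pass.
n_classes, n_feat, patch = 5, 64, 9

features = nn.Sequential(
    nn.Conv2d(4, n_feat, kernel_size=5, padding=2),   # 4 MR modalities
    nn.ReLU(),
)
fc_head = nn.Linear(n_feat * patch * patch, n_classes)

# Convolutional head with the *same* weights: a patch x patch "FC" convolution.
conv_head = nn.Conv2d(n_feat, n_classes, kernel_size=patch)
with torch.no_grad():
    conv_head.weight.copy_(fc_head.weight.view(n_classes, n_feat, patch, patch))
    conv_head.bias.copy_(fc_head.bias)

img = torch.randn(1, 4, 240, 240)                     # one brain slice
dense_pred = conv_head(features(img))   # scores for every patch centre at once
print(dense_pred.shape)                 # -> torch.Size([1, 5, 232, 232])
```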

  13. Artificial neural networks in pancreatic disease.

    Science.gov (United States)

    Bartosch-Härlid, A; Andersson, B; Aho, U; Nilsson, J; Andersson, R

    2008-07-01

    Artificial neural networks (ANNs) are non-linear pattern recognition techniques that are rapidly gaining popularity in medical decision-making. This study investigated the use of ANNs for diagnostic and prognostic purposes in pancreatic disease, especially acute pancreatitis and pancreatic cancer. PubMed was searched for articles on the use of ANNs in pancreatic diseases using the MeSH terms 'neural networks (computer)', 'pancreatic neoplasms', 'pancreatitis' and 'pancreatic diseases'. A systematic review of the articles was performed. Eleven articles were identified, published between 1993 and 2007. The situations that lend themselves best to analysis by ANNs are complex multifactorial relationships, medical decisions where a second opinion is needed, and situations where automated interpretation is required, for example when the number of available experts is inadequate. Conventional linear models have limitations in terms of diagnosis and prediction of outcome in acute pancreatitis and pancreatic cancer. Management of these disorders can be improved by applying ANNs to existing clinical parameters and newly established gene expression profiles. (c) 2008 British Journal of Surgery Society Ltd. Published by John Wiley & Sons, Ltd.

  14. BOUNDARY DEPTH INFORMATION USING HOPFIELD NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    S. Xu

    2016-06-01

    Full Text Available Depth information is widely used for the representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One uses distance cues from a depth camera, but the results depend heavily on the device and the accuracy degrades greatly as the distance from the object increases. The other uses binocular cues from stereo matching to obtain the depth information; collecting depth information of different scenes by stereo matching methods has become increasingly mature and convenient. In the objective function, the data term ensures that the difference between matched pixels is small, while the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary that keeps the discontinuities and uses a Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. Finally, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and the results show that our boundary depth information is more accurate than that of other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.
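
    As background to the optimization step described above, the following generic sketch shows how a discrete Hopfield network minimizes an energy of the form E(s) = -0.5 s^T W s - b^T s by asynchronous threshold updates. The weight matrix and bias that would encode the data and boundary terms of the stereo-matching problem are left as random placeholders here; only the update rule is the point of the example.

```python
import numpy as np

# Discrete Hopfield network as an energy minimiser (W, b are placeholders).
rng = np.random.default_rng(1)
n = 50
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                      # symmetric weights
np.fill_diagonal(W, 0.0)               # no self-connections
b = rng.normal(size=n)

def energy(s):
    return -0.5 * s @ W @ s - b @ s

s = rng.integers(0, 2, size=n).astype(float)   # random initial binary state
for sweep in range(100):
    for i in rng.permutation(n):               # asynchronous updates
        s[i] = 1.0 if W[i] @ s + b[i] > 0 else 0.0

print("final energy:", energy(s))
```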

  15. Neural network analysis for hazardous waste characterization

    Energy Technology Data Exchange (ETDEWEB)

    Misra, M.; Pratt, L.Y.; Farris, C. [Colorado School of Mines, Golden, CO (United States)] [and others]

    1995-12-31

    This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig face characterization experimental cell at INEL to determine the depth and nature of buried objects. This project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of this approach. The resulting system was able to determine the nature of buried objects correctly 87% of the time and was able to locate a buried object to within an average error of 0.8 feet. These statistics were gathered based on a large test set and so can be considered reliable. Considering the limited nature of this study, these results strongly indicate the feasibility of this approach, and the importance of appropriate preprocessing of neural network input data.

  16. Object Classification Using Substance Based Neural Network

    Directory of Open Access Journals (Sweden)

    P. Sengottuvelan

    2014-01-01

    Full Text Available Object recognition has shown tremendous growth in the field of image analysis. The required set of image objects is identified and retrieved on the basis of object recognition. In this paper, we propose a novel classification technique called substance based image classification (SIC) using a wavelet neural network. The foremost task of SIC is to remove the surrounding regions from an image to reduce the misclassified portion and to effectively reflect the shape of an object. First, the image to be classified is processed by the SIC system through segmentation of the image. Next, in order to attain more accurate information, the wavelet transform is applied to the extracted set of regions to extract the configured set of features. Finally, using the neural network classifier model, misclassifications over the given natural images are reduced and background regions are removed from the natural image using LSEG segmentation. Moreover, to increase the accuracy of object classification, the SIC system involves the removal of the surrounding regions of the image. Performance evaluation reveals that the proposed SIC system reduces the occurrence of misclassification and reflects the exact shape of an object to approximately 10–15%.

  17. Energy coding in neural network with inhibitory neurons.

    Science.gov (United States)

    Wang, Ziyin; Wang, Rubin; Fang, Ruiyan

    2015-04-01

    This paper aimed at assessing and comparing the effects of inhibitory neurons in a neural network on the neural energy distribution, and the network activities in the absence of inhibitory neurons, in order to understand the nature of neural energy distribution and neural energy coding. Under stimulation, synchronous oscillation differs significantly between neural networks with and without inhibitory neurons, and this difference can be quantitatively evaluated by the characteristic energy distribution. In addition, the difference in synchronous oscillation of the neural activity can be quantitatively described by the change of the energy distribution as the network parameters are gradually adjusted. Compared with the traditional method of correlation coefficient analysis, quantitative indicators based on the characteristics of the neural energy distribution are more effective in reflecting the dynamic features of neural network activities. Meanwhile, this neural coding method, taking a global perspective of neural activity, effectively avoids the current defects of neural encoding and decoding theory and the enormous difficulties encountered. Our studies have shown that neural energy coding is a new coding theory with high efficiency and great potential.

  18. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we have observed an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason for the great interest in these technologies is that classical solution methods either do not exist or are not suitable, for example because of a lack of robustness. They are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is its relative simplicity and its ability to learn from a set of pattern situations. The most commonly used algorithm for the learning phase is back-propagation of error (BPE). The basis of BPE is the minimization of the error function, representing the sum of squared errors on the outputs of the neural net over all patterns of the learning set. However, already on first use of BPE one finds that the handling of the learning factor must be completed by a suitable method; the stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one for managing the learning process when the value of the error function is relatively large, and a second for when the value of the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning-factor handling method and to present the results.
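
    A minimal sketch of these ideas, assuming a two-layer sigmoid network and a simple "bold driver" style adaptation of the learning factor (grow it while the error falls, shrink it when the error rises). The toy data, constants and adaptation rule are illustrative, not the article's derived functions.

```python
import numpy as np

# Back-propagation of error (BPE) in matrix form with an adaptive learning factor.
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = (X.sum(axis=1, keepdims=True) > 1.5).astype(float)   # toy patterns

W1, W2 = rng.normal(0, 0.5, (3, 8)), rng.normal(0, 0.5, (8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
eta, prev_err = 0.5, np.inf

for epoch in range(2000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = 0.5 * np.sum((out - y) ** 2)

    # handle the learning factor: grow while improving, shrink otherwise
    eta = min(eta * 1.05, 10.0) if err < prev_err else eta * 0.5
    prev_err = err

    # backward pass (matrix form)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_out / len(X)
    W1 -= eta * X.T @ d_h / len(X)

print("final sum-of-squares error:", err)
```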

  19. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: morvymm@yahoo.com.m [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)

    2011-02-15

    With the Bonner spheres spectrometer the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANNs) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite its advantages, the ANN approach still has some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using ANNs and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out. (Author)

  20. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  1. Artificial neural networks with an infinite number of nodes

    Science.gov (United States)

    Blekas, K.; Lagaris, I. E.

    2017-10-01

    A new class of Artificial Neural Networks is described, incorporating a node density function and functional weights. This network, containing an infinite number of nodes, excels in generalization and possesses a superior extrapolation capability.

  2. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data was collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  3. Artificial neural networks for processing fluorescence spectroscopy data in skin cancer diagnostics

    Science.gov (United States)

    Lenhardt, L.; Zeković, I.; Dramićanin, T.; Dramićanin, M. D.

    2013-11-01

    Over the years various optical spectroscopic techniques have been widely used as diagnostic tools in the discrimination of many types of malignant diseases. Recently, synchronous fluorescent spectroscopy (SFS) coupled with chemometrics has been applied in cancer diagnostics. The SFS method involves simultaneous scanning of both emission and excitation wavelengths while keeping the interval of wavelengths (constant-wavelength mode) or frequencies (constant-energy mode) between them constant. This method is fast, relatively inexpensive, sensitive and non-invasive. Total synchronous fluorescence spectra of normal skin, nevus and melanoma samples were used as input for training of artificial neural networks. Two different types of artificial neural networks were trained, the self-organizing map and the feed-forward neural network. Histopathology results of investigated skin samples were used as the gold standard for network output. Based on the obtained classification success rate of neural networks, we concluded that both networks provided high sensitivity with classification errors between 2 and 4%.

  4. Use of genetic algorithms for encoding efficient neural network architectures: neurocomputer implementation

    Science.gov (United States)

    James, Jason; Dagli, Cihan H.

    1995-04-01

    In this study an attempt is being made to encode the architecture of a neural network in a chromosome string for evolving robust, fast-learning, minimal neural network architectures through genetic algorithms. Various attributes affecting the learning of the network are represented as genes. The performance of the networks is used as the fitness value. Neural network architecture design concepts are initially demonstrated using a backpropagation architecture with the standard data set of Rosenberg and Sejnowski for text to speech conversion on Adaptive Solutions Inc.'s CNAPS Neuro-Computer. The architectures obtained are compared with the one reported in the literature for the standard data set used. The study concludes by providing some insights regarding the architecture encoding for other artificial neural network paradigms.
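
    The chromosome-and-fitness idea described above might look roughly like the following sketch, in which each gene indexes one network attribute (hidden units, learning rate, momentum) and the fitness is validation accuracy after a short training run. The gene set, GA settings and the synthetic classification task are assumptions for illustration; the original study used a backpropagation network on the Rosenberg-Sejnowski text-to-speech data on CNAPS hardware.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

HIDDEN = [4, 8, 16, 32]          # attributes encoded as genes
LR = [0.001, 0.01, 0.1]
MOM = [0.0, 0.5, 0.9]

def fitness(genes):
    h, lr, m = genes
    net = MLPClassifier(hidden_layer_sizes=(HIDDEN[h],), solver="sgd",
                        learning_rate_init=LR[lr], momentum=MOM[m],
                        max_iter=200, random_state=0)
    net.fit(Xtr, ytr)
    return net.score(Xva, yva)   # network performance is the fitness value

def random_genes():
    return [rng.integers(len(HIDDEN)), rng.integers(len(LR)), rng.integers(len(MOM))]

pop = [random_genes() for _ in range(8)]
for gen in range(5):
    parents = sorted(pop, key=fitness, reverse=True)[:4]   # truncation selection
    children = []
    for _ in range(4):
        a, b = rng.choice(4, size=2, replace=False)
        cut = rng.integers(1, 3)
        child = parents[a][:cut] + parents[b][cut:]        # one-point crossover
        if rng.random() < 0.3:                             # mutation
            i = rng.integers(3)
            child[i] = random_genes()[i]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best architecture:", HIDDEN[best[0]], "hidden units, lr =", LR[best[1]],
      ", momentum =", MOM[best[2]])
```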

  5. Gait features analysis using artificial neural networks - testing the footwear effect.

    Science.gov (United States)

    Wang, Jikun; Zielińska, Teresa

    2017-01-01

    The aim of this paper is to provide methods for automatic detection of differences in gait features depending on footwear. Artificial neural networks were applied in the study. The gait data were recorded during walking with different footwear for testing and validation of the proposed method. The gait properties were analyzed using EMG (electromyography) signals and two types of artificial neural networks: the learning vector quantization (LVQ) classifying network and the clustering competitive network. The obtained classification and clustering results were discussed. For comparative studies, velocities and accelerations of the leg joint trajectories were used. The features indicated by the neural networks were compared with the conclusions formulated by analyzing the above mentioned trajectories for the ankle and knee joints. The matching between experimentally recorded joint trajectories and the results given by the neural networks was studied. The analysis indicated which muscles are most influenced by the footwear and established the relation between the footwear type and the muscle work.

  6. Automated Modeling of Microwave Structures by Enhanced Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-12-01

    Full Text Available The paper describes the methodology of the automated creation of neural models of microwave structures. During the creation process, artificial neural networks are trained using a combination of particle swarm optimization and the quasi-Newton method to avoid critical training problems of conventional neural nets. In the paper, neural networks are used to approximate the behavior of a planar microwave filter (moment method, Zeland IE3D). In order to evaluate the efficiency of neural modeling, global optimizations are performed using both numerical models and neural ones. The two approaches are compared from the viewpoint of CPU-time demands and accuracy. Based on the conclusions, methodological recommendations for including neural networks in microwave design are formulated.

  7. Comparison of Back propagation neural network and Back propagation neural network Based Particle Swarm intelligence in Diagnostic Breast Cancer

    Directory of Open Access Journals (Sweden)

    Farahnaz SADOUGHI

    2014-03-01

    Full Text Available Breast cancer is the most commonly diagnosed cancer and one of the most common causes of cancer death in women all over the world. The use of computer technology to support breast cancer diagnosis is now widespread across a broad range of medical areas. Early diagnosis of this disease can greatly enhance the chances of long-term survival of breast cancer victims. Artificial Neural Networks (ANNs), as a principal method, play an important role in the early diagnosis of breast cancer. This paper studies a Levenberg-Marquardt Backpropagation (LMBP) neural network and Levenberg-Marquardt Backpropagation based Particle Swarm Optimization (LMBP-PSO) for the diagnosis of breast cancer. The obtained results show that both the LMBP and the LMBP-PSO systems provide high classification efficiency, but LMBP-PSO needs the minimum training and testing time. This helps in developing a Medical Decision System (MDS) for breast cancer diagnosis. It can also be used as a secondary observer in clinical decision making.

  8. Quantum Entanglement in Neural Network States

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-04-01

    Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling while at the same time strongly deviating from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears a Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the

  9. Quantum Entanglement in Neural Network States

    Directory of Open Access Journals (Sweden)

    Dong-Ling Deng

    2017-05-01

    Full Text Available Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling while at the same time strongly deviating from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears a Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our

  10. Modeling of methane emissions using artificial neural network approach

    Directory of Open Access Journals (Sweden)

    Stamenković Lidija J.

    2015-01-01

    Full Text Available The aim of this study was to develop a model for forecasting CH4 emissions at the national level, using Artificial Neural Networks (ANNs) with broadly available sustainability, economic and industrial indicators as their inputs. ANN modeling was performed using two different types of architecture: a Backpropagation Neural Network (BPNN) and a General Regression Neural Network (GRNN). A conventional multiple linear regression (MLR) model was also developed in order to compare model performance and assess which model provides the best results. The ANN and MLR models were developed and tested using the same annual data for 20 European countries. The ANN model demonstrated very good performance, significantly better than the MLR model. It was shown that a forecast of CH4 emissions at the national level using the ANN model can be made successfully and accurately for a future period of up to two years, thereby opening the possibility of applying such a modeling technique to support the implementation of sustainable development strategies and environmental management policies. [Project of the Ministry of Science of the Republic of Serbia, No. 172007]

  11. Eye tracking using artificial neural networks for human computer interaction.

    Science.gov (United States)

    Demjén, E; Aboši, V; Tomori, Z

    2011-01-01

    This paper describes an ongoing project that has the aim to develop a low cost application to replace a computer mouse for people with physical impairment. The application is based on an eye tracking algorithm and assumes that the camera and the head position are fixed. Color tracking and template matching methods are used for pupil detection. Calibration is provided by neural networks as well as by parametric interpolation methods. Neural networks use back-propagation for learning and bipolar sigmoid function is chosen as the activation function. The user's eye is scanned with a simple web camera with backlight compensation which is attached to a head fixation device. Neural networks significantly outperform parametric interpolation techniques: 1) the calibration procedure is faster as they require less calibration marks and 2) cursor control is more precise. The system in its current stage of development is able to distinguish regions at least on the level of desktop icons. The main limitation of the proposed method is the lack of head-pose invariance and its relative sensitivity to illumination (especially to incidental pupil reflections).
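
    The calibration step described above can be sketched as a small regression network that maps pupil coordinates to screen coordinates from a handful of calibration marks; tanh is used below as a stand-in for the bipolar sigmoid activation. The synthetic mapping, noise level and network size are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pupil = rng.uniform(-1, 1, size=(25, 2))                 # calibration marks
screen = np.column_stack([                               # unknown non-linear map
    800 + 600 * np.tanh(1.3 * pupil[:, 0] + 0.2 * pupil[:, 1]),
    450 + 350 * np.tanh(0.9 * pupil[:, 1]),
]) + rng.normal(0, 5, size=(25, 2))                      # measurement noise

# Small network calibrating pupil position -> screen position.
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(pupil, screen)

gaze = model.predict(np.array([[0.1, -0.3]]))            # new pupil position
print("estimated screen point (px):", np.round(gaze, 1))
```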

  12. Implementation of neural network for color properties of polycarbonates

    Science.gov (United States)

    Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.

    2014-05-01

    In the present paper, the applicability of artificial neural networks (ANNs) is investigated for the color properties of plastics. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18 and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG) and Levenberg-Marquardt (LM) in the feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives and pigments, while the three tristimulus color values L*, a* and b* were used as the output layer. Statistical analysis in terms of the root-mean-squared error (RMS), the absolute fraction of variance (R squared) and the mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons in the hidden layer of the feed-forward back-propagation ANN model has shown the best result in the present study. The accuracy of the ANN model in reducing errors is shown to be acceptable in all statistical analyses. It is concluded that the ANN provides a feasible method for error reduction in predicting specific color tristimulus values.

  13. Optical Neural Network Models Applied To Logic Program Execution

    Science.gov (United States)

    Stormon, Charles D.

    1988-05-01

    Logic programming is being used extensively by Artificial Intelligence researchers to solve problems including natural language processing and expert systems. These languages, of which Prolog is the most widely used, promise to revolutionize software engineering, but much greater performance is needed. Researchers have demonstrated the applicability of neural network models to the solution of certain NP-complete problems, but these methods are not obviously applicable to the execution of logic programs. This paper outlines the use of neural networks in four aspects of the logic program execution cycle, and discusses results of a simulation of three of these. Four neural network functional units are described, called the substitution agent, the clause filter, the structure processor, and the heuristics generator, respectively. Simulation results suggest that the system described may provide several orders of magnitude improvement in execution speed for large logic programs. However, practical implementation of the proposed architecture will require the application of optical computing techniques due to the large number of neurons required, and the need for massive, adaptive connectivity.

  14. Precipitation Estimation from Remotely Sensed Data Using Deep Neural Network

    Science.gov (United States)

    Tao, Y.; Gao, X.; Hsu, K. L.; Sorooshian, S.; Ihler, A.

    2015-12-01

    This research develops a precipitation estimation system from remotely sensed data using state-of-the-art machine learning algorithms. Compared to ground-based precipitation measurements, satellite-based precipitation estimation products have advantages in temporal resolution and spatial coverage. Also, the massive amount of satellite data contains various measures related to precipitation formation and development. On the other hand, deep learning algorithms have recently been developed in the area of machine learning and constitute a breakthrough in dealing with large and complex datasets, especially image data. Here, we attempt to engage deep learning techniques to provide hourly precipitation estimation from satellite information, such as longwave infrared data. The brightness temperature data from the infrared channels are considered to contain cloud information. The radar Stage IV dataset is used as the ground measurement for parameter calibration. Stacked denoising auto-encoders (SDAE) are applied here to build a 4-layer neural network with 1000 hidden nodes in each hidden layer. SDAE involves two major steps: (1) greedily pre-training each layer as a denoising auto-encoder, using the output of the previously trained hidden layer (starting from the visible layer) to initialize the parameters; (2) fine-tuning the whole deep neural network with a supervised criterion. The results are compared with the satellite precipitation product PERSIANN-CCS (Precipitation Estimation from Remotely Sensed Imagery using an Artificial Neural Network Cloud Classification System). Based on the results, we draw several valuable conclusions. By properly training the neural network, it is able to extract useful information for precipitation estimation; for example, it can reduce the mean squared error of the precipitation by 58% for the summer season in the central United States over the validation period. The SDAE method also captures the shape of the precipitation from the cloud shape better than the CCS product. Design of
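
    The two SDAE steps listed above can be sketched compactly as follows: each layer is first pre-trained as a denoising auto-encoder on the (corrupted) output of the previous layer, and the stacked network is then fine-tuned with a supervised loss. The layer widths, masking-noise level and synthetic data are assumptions for illustration; the system in the record used four hidden layers of 1000 nodes on infrared brightness temperatures.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(256, 32)                 # stand-in for infrared features
y = X.mean(dim=1, keepdim=True)         # stand-in for rain-rate target

sizes = [32, 64, 64]                    # visible -> hidden layer widths
layers, inputs = [], X
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):                # (1) denoising auto-encoder pre-training
        noisy = inputs * (torch.rand_like(inputs) > 0.2).float()   # masking noise
        recon = dec(torch.sigmoid(enc(noisy)))
        loss = nn.functional.mse_loss(recon, inputs)
        opt.zero_grad(); loss.backward(); opt.step()
    layers += [enc, nn.Sigmoid()]
    with torch.no_grad():
        inputs = torch.sigmoid(enc(inputs))     # feed the next layer

model = nn.Sequential(*layers, nn.Linear(sizes[-1], 1))   # regression head
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):                    # (2) supervised fine-tuning
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("fine-tuned training MSE:", loss.item())
```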

  15. Rod-Shaped Neural Units for Aligned 3D Neural Network Connection.

    Science.gov (United States)

    Kato-Negishi, Midori; Onoe, Hiroaki; Ito, Akane; Takeuchi, Shoji

    2017-08-01

    This paper proposes neural tissue units with aligned nerve fibers (called rod-shaped neural units) that connect neural networks with aligned neurons. To make the proposed units, 3D fiber-shaped neural tissues covered with a calcium alginate hydrogel layer are prepared with a microfluidic system and are cut in an accurate and reproducible manner. These units have aligned nerve fibers inside the hydrogel layer and connectable points on both ends. By connecting the units with a poly(dimethylsiloxane) guide, 3D neural tissues can be constructed and maintained for more than two weeks of culture. In addition, neural networks can be formed between the different neural units via synaptic connections. Experimental results indicate that the proposed rod-shaped neural units are effective tools for the construction of spatially complex connections with aligned nerve fibers in vitro. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Multiple image sensor data fusion through artificial neural networks

    Science.gov (United States)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  17. Behaviour in 0 of the Neural Networks Training Cost

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1998-01-01

    We study the behaviour in zero of the derivatives of the cost function used when training non-linear neural networks. It is shown that a fair number of first, second and higher order derivatives vanish in zero, validating the belief that 0 is a peculiar and potentially harmful location....... These calculations are related to practical and theoretical aspects of neural networks training....

  18. Neural network model to control an experimental chaotic pendulum

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Takens, F; vandenBleek, CM

    1996-01-01

    A feedforward neural network was trained to predict the motion of an experimental, driven, and damped pendulum operating in a chaotic regime. The network learned the behavior of the pendulum from a time series of the pendulum's angle, the single measured variable. The validity of the neural

  19. Classification of Urinary Calculi using Feed-Forward Neural Networks

    African Journals Online (AJOL)

    In this work the results of classification of these types of calculi (using their infrared spectra in the region 1450–450 cm–1) by feed-forward neural networks are presented. Genetic algorithms were used for optimization of neural networks and for selection of the spectral regions most suitable for classification purposes.

  20. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing computational cost in simulation, the surgery procedure can also serve as a quick...
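
    One optimal-brain-surgeon step can be sketched as below on a linear least-squares model, where the Hessian H = X^T X of the quadratic loss is exact: the weight with the smallest saliency w_q^2 / (2 [H^-1]_qq) is deleted and the remaining weights are re-adjusted in closed form. Using a linear model instead of the structural-dynamics networks of the paper is an assumption made to keep the example short; for a real network the same formulas are applied with an approximated Hessian.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
w_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.linalg.lstsq(X, y, rcond=None)[0]      # "trained" weights
H_inv = np.linalg.inv(X.T @ X)                # inverse Hessian of the MSE loss

# Saliency of removing weight q:  L_q = w_q^2 / (2 [H^-1]_qq)
saliency = w ** 2 / (2.0 * np.diag(H_inv))
q = int(np.argmin(saliency))

# Delete weight q and re-adjust the remaining weights in one step:
#   delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q
w_pruned = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
w_pruned[q] = 0.0                             # exactly zero after the surgery

print("pruned weight index:", q)
print("weights after surgery:", np.round(w_pruned, 3))
```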

  1. Neural networks as a tool for unit commitment

    DEFF Research Database (Denmark)

    Rønne-Hansen, Peter; Rønne-Hansen, Jan

    1991-01-01

    Some of the fundamental problems when solving the power system unit commitment problem by means of neural networks have been attacked. It has been demonstrated for a small example that neural networks might be a viable alternative. Some of the major problems solved in this initiating phase form...

  2. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.......Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure....

  3. Classes of feedforward neural networks and their circuit complexity

    NARCIS (Netherlands)

    Shawe-Taylor, John S.; Anthony, Martin H.G.; Kern, Walter

    1992-01-01

    This paper aims to place neural networks in the context of boolean circuit complexity. We define appropriate classes of feedforward neural networks with specified fan-in, accuracy of computation and depth and using techniques of communication complexity proceed to show that the classes fit into a

  4. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.......Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure....

  5. Boosted jet identification using particle candidates and deep neural networks

    CERN Document Server

    CMS Collaboration

    2017-01-01

    This note presents developments for the identification of hadronically decaying top quarks using deep neural networks in CMS. A new method that utilizes one dimensional convolutional neural networks based on jet constituent particles is proposed. Alternative methods using boosted decision trees based on jet observables are compared. The new method shows significant improvement in performance.
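
    A one-dimensional convolutional network over jet constituents, of the general kind mentioned in this note, might be sketched as follows: each jet is a fixed-length list of particles with a few features each, per-particle convolutions build an embedding, and a pooling layer aggregates over constituents before classification. The feature set, layer sizes and constituent count are illustrative assumptions, not the CMS tagger's configuration.

```python
import torch
import torch.nn as nn

n_constituents, n_features = 40, 3            # e.g. pT, eta, phi per particle

model = nn.Sequential(
    nn.Conv1d(n_features, 32, kernel_size=1),   # per-particle embedding
    nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),                    # pool over constituents
    nn.Flatten(),
    nn.Linear(32, 2),                           # top-quark jet vs. background
)

jets = torch.randn(16, n_features, n_constituents)   # a batch of 16 jets
labels = torch.randint(0, 2, (16,))
logits = model(jets)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
print(logits.shape, float(loss))
```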

  6. Mapping Neural Network Derived from the Parzen Window Estimator

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Hartmann, U.

    1992-01-01

    The article presents a general theoretical basis for the construction of mapping neural networks. The theory is based on the Parzen Window estimator for......The article presents a general theoretical basis for the construction of mapping neural networks. The theory is based on the Parzen Window estimator for...

  7. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1999-01-01

    of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis...

  8. Neural Network for Optimization of Existing Control Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1995-01-01

    The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems.......The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems....

  9. Using Neural Networks to Predict MBA Student Success

    Science.gov (United States)

    Naik, Bijayananda; Ragothaman, Srinivasan

    2004-01-01

    Predicting MBA student performance for admission decisions is crucial for educational institutions. This paper evaluates the ability of three different models--neural networks, logit, and probit to predict MBA student performance in graduate programs. The neural network technique was used to classify applicants into successful and marginal student…

  10. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data obtained from an inverse fluidized bed reactor treating the starch industry wastewater.

  11. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    Consequently, many structures based on artificial neural networks (ANN) have been developed in the literature. The most significant ... Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic Distortion. 1. ..... and pure shunt active filters, IEEE 38th Conf on Industry Applications, Vol. 2, pp.

  12. Application of Neural Networks to House Pricing and Bond Rating

    NARCIS (Netherlands)

    Daniëls, H.A.M.; Kamp, B.; Verkooijen, W.J.H.

    1997-01-01

    Feed forward neural networks receive a growing attention as a data modelling tool in economic classification problems. It is well-known that controlling the design of a neural network can be cumbersome. Inaccuracies may lead to a manifold of problems in the application such as higher errors due to

  13. Neural Networks for Language Identification: A Comparative Study.

    Science.gov (United States)

    MacNamara, Shane; Cunningham, Padraig; Byrne, John

    1998-01-01

    Analyzes a neural network for its ability to perform a task involving identification of the language entries in a 19th-century library catalog containing entries in 14 different languages. Compares the neural network's performance with that of trigrams and a suffix/morphology analysis; the trigrams prove to be superior. (AEF)

  14. Multilayer perceptron neural network for downscaling rainfall in arid ...

    Indian Academy of Sciences (India)

    Multilayer perceptron neural network for downscaling rainfall in arid region: A case study of Baluchistan, Pakistan ... A multilayer perceptron (MLP) neural network has been proposed in the present study for the downscaling of rainfall in the data scarce arid region of Baluchistan province of Pakistan, which is considered as ...

  15. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum, anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although training typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, the most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a neural network based on directional reflectance spectra to help us understand the problem from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  16. Artificial neural networks in predicting current in electric arc furnaces

    Science.gov (United States)

    Panoiu, M.; Panoiu, C.; Iordan, A.; Ghiormez, L.

    2014-03-01

    The paper presents a study of the possibility of using artificial neural networks for the prediction of the current and the voltage of Electric Arc Furnaces. Multi-layer perceptron and radial basis function Artificial Neural Networks implemented in Matlab were used. The study is based on measured data from an Electric Arc Furnace in an industrial plant in Romania.

  17. Parameter estimation of an aeroelastic aircraft using neural networks

    Indian Academy of Sciences (India)

    Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of ... of the network in terms of the number of neurons in the hidden layer, the learning rate, the momentum rate etc. is not an exact ...

  18. Artificial neural networks for prediction of percentage of water ...

    Indian Academy of Sciences (India)

    According to these input parameters, in the neural networks model, the percentage of water absorption of each specimen was predicted. The training and testing results in the neural networks model have shown a strong potential for predicting the percentage of water absorption of the geopolymer specimens.

  19. Comparative performance of some popular artificial neural network ...

    Indian Academy of Sciences (India)

    Comparative performance of some popular artificial neural network algorithms on benchmark and function approximation problems ... dynamic range of these functions, it is suggested that these functions can also be considered as standard benchmark problems for function approximation using artificial neural networks.

  20. Modelling electric trains energy consumption using Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Fernandez, P.; Garcia Roman, C.; Insa Franco, R.

    2016-07-01

    Nowadays there is an evident concern regarding the efficiency and sustainability of the transport sector due to both the threat of climate change and the current financial crisis. This concern explains the growth of railways over the last years as they present an inherent efficiency compared to other transport means. However, in order to further expand their role, it is necessary to optimise their energy consumption so as to increase their competitiveness. Improving railways energy efficiency requires both reliable data and modelling tools that will allow the study of different variables and alternatives. With this need in mind, this paper presents the development of consumption models based on neural networks that calculate the energy consumption of electric trains. These networks have been trained based on an extensive set of consumption data measured in line 1 of the Valencia Metro Network. Once trained, the neural networks provide a reliable estimation of the vehicles consumption along a specific route when fed with input data such as train speed, acceleration or track longitudinal slope. These networks represent a useful modelling tool that may allow a deeper study of railway lines in terms of energy expenditure with the objective of reducing the costs and environmental impact associated to railways. (Author)
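
    The kind of consumption model described above can be sketched as a small feed-forward network fed with train speed, acceleration and track slope and trained to output energy consumption. The synthetic data generator and network size below are assumptions, since the real model was trained on measurements from line 1 of the Valencia Metro Network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2000
speed = rng.uniform(0, 22, n)            # m/s
accel = rng.uniform(-1.2, 1.2, n)        # m/s^2
slope = rng.uniform(-0.04, 0.04, n)      # longitudinal slope (m/m)

# Crude physics-flavoured stand-in for measured consumption (kWh per 1 s interval)
mass = 160e3
power = np.clip(mass * speed * (accel + 9.81 * slope) + 4e3 * speed, 0, None)
energy = power * 1.0 / 3.6e6 + rng.normal(0, 0.02, n)

X = np.column_stack([speed, accel, slope])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20),
                                   max_iter=2000, random_state=0))
model.fit(X[:1500], energy[:1500])
print("R^2 on held-out data:", round(model.score(X[1500:], energy[1500:]), 3))
```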

  1. LMI Conditions for Global Stability of Fractional-Order Neural Networks.

    Science.gov (United States)

    Zhang, Shuo; Yu, Yongguang; Yu, Junzhi

    2017-10-01

    Fractional-order neural networks play a vital role in modeling the information processing of neuronal interactions. The global stability of fractional-order neural networks is still an open and important topic. This paper proposes some simplified linear matrix inequality (LMI) stability conditions for fractional-order linear and nonlinear systems. The global stability analysis of fractional-order neural networks then employs the results from the obtained LMI conditions. In LMI form, the obtained results include the existence and uniqueness of the equilibrium point and its global stability, which simplify and extend some previous work on the stability analysis of fractional-order neural networks. Moreover, a generalized projective synchronization method between such neural systems is given, along with its corresponding LMI condition. Finally, two numerical examples are provided to illustrate the effectiveness of the established LMI conditions.

  2. Spacecraft Neural Network Control System Design using FPGA

    OpenAIRE

    Hanaa T. El-Madany; Faten H. Fahmy; Ninet M. A. El-Rahman; Hassen T. Dorrah

    2011-01-01

    Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products of space technologies. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. Field programmable gate array (FPGA) is a digital device that owns reprogrammable properties and robust flexibility. For the neural network based instrument prototype in real time application, conventional specific VLSI neural chip design suffer...

  3. IDENTIFICATION AND CONTROL OF AN ASYNCHRONOUS MACHINE USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A ZERGAOUI

    2000-06-01

    Full Text Available In this work, we present the application of artificial neural networks to the identification and control of the asynchronous motor, which is a complex nonlinear system with variable internal dynamics. We show that neural networks can be applied to control the stator currents of the induction motor. The results of the different simulations are presented to evaluate the performance of the proposed neural controller.

  4. Research on quasi-dynamic calibration model of plastic sensitive element based on neural networks

    Science.gov (United States)

    Wang, Fang; Kong, Deren; Yang, Lixia; Zhang, Zouzou

    2017-08-01

    The quasi-dynamic calibration accuracy of a plastic sensitive element depends on the accuracy of the fitted model between pressure and deformation. Exploiting the excellent nonlinear mapping ability of the RBF (Radial Basis Function) neural network, this paper establishes a calibration model that takes the peak pressure as input and the deformation of the plastic sensitive element as output. Calibration experiments on a batch of copper cylinders were carried out on a quasi-dynamic pressure calibration device over the pressure range 200 MPa to 700 MPa, and the experimental data were acquired with a standard pressure monitoring system. The neural-network-based quasi-dynamic calibration model was trained with the MATLAB Neural Network Toolbox. Using the testing samples, the prediction accuracy of the neural network model was compared with an exponential fitting model and a second-order polynomial fitting model. The results show that the predictions of the neural network model are closest to the testing samples, with an accuracy better than 0.5%, one order of magnitude higher than the second-order polynomial fitting model and two orders higher than the exponential fitting model. The neural-network-based quasi-dynamic calibration model between peak pressure and deformation of the plastic sensitive element provides an important basis for creating higher-accuracy quasi-dynamic calibration tables.
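    The following minimal sketch fits a Gaussian RBF model between peak pressure and deformation in NumPy, as a stand-in for the calibration model described above. The centres, width and data are assumptions for demonstration; the paper trained its model with the MATLAB Neural Network Toolbox on measured copper-cylinder data in the 200-700 MPa range.

    # A minimal Gaussian RBF fit between peak pressure and deformation.
    import numpy as np

    rng = np.random.default_rng(1)
    pressure = np.linspace(200.0, 700.0, 40)                  # MPa
    deformation = (1.2 + 0.004 * pressure + 1e-6 * pressure**2
                   + rng.normal(0, 0.01, pressure.size))      # mm (synthetic)

    centres = np.linspace(200.0, 700.0, 10)
    width = 60.0                                              # MPa

    def design_matrix(p):
        return np.exp(-((p[:, None] - centres[None, :]) / width) ** 2)

    # Train the output weights by linear least squares (the classic RBF recipe).
    Phi = design_matrix(pressure)
    w, *_ = np.linalg.lstsq(Phi, deformation, rcond=None)

    # Predict deformation for new peak pressures.
    p_test = np.array([250.0, 450.0, 650.0])
    print(design_matrix(p_test) @ w)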

  5. Death and rebirth of neural activity in sparse inhibitory networks

    Science.gov (United States)

    Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro

    2017-05-01

    Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.
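    To make the setting concrete, the sketch below simulates a pulse-coupled leaky integrate-and-fire network with random sparse inhibitory connectivity and heterogeneous suprathreshold drive, and counts how many neurons remain active. All parameter values are illustrative assumptions, and the paper's mean-field analysis is not reproduced here.

    # Bare-bones pulse-coupled LIF network with random sparse inhibition.
    import numpy as np

    rng = np.random.default_rng(2)
    N, K = 400, 20          # neurons, in-degree of the sparse network
    g = 4.0                 # inhibitory coupling strength
    dt, T = 1e-3, 5.0       # Euler step and simulated time (arbitrary units)

    a = rng.uniform(1.05, 1.5, N)            # heterogeneous suprathreshold drive
    C = np.zeros((N, N))
    for i in range(N):                        # K random inhibitory inputs each
        C[i, rng.choice(N, K, replace=False)] = 1.0

    v = rng.uniform(0.0, 1.0, N)
    fired_ever = np.zeros(N, dtype=bool)

    for _ in range(int(T / dt)):
        v += dt * (a - v)                     # leaky integration toward a
        spikes = v >= 1.0
        fired_ever |= spikes
        v[spikes] = 0.0                       # reset after firing
        # Instantaneous inhibitory pulses from the neurons that just fired.
        v -= (g / K) * (C @ spikes.astype(float))

    print("fraction of neurons active at least once:", fired_ever.mean())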

  6. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    Science.gov (United States)

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such systems, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided the spiking neural network can take full advantage of the inherent parallelism of the hardware. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that it might allow neuroscientists to put together sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.

  7. Using neural networks for prediction of air pollution index in industrial city

    Science.gov (United States)

    Rahman, P. A.; Panchenko, A. A.; Safarov, A. M.

    2017-10-01

    This paper addresses the use of artificial neural networks for ecological prediction of the state of the atmospheric air in an industrial city, enabling timely environmental decisions. Two types of prediction models for the air pollution index are developed on the basis of neural networks: a temporal model (short-term forecast of pollutant concentrations in the air for the nearest days) and a spatial model (forecast of the atmospheric pollution index at any point of the city). The development stages of the neural network models are briefly reviewed and their parameters are described. The adequacy of the prediction models is assessed via the correlation coefficient between the output and reference data. Moreover, because the «neural network code» of the offered models is difficult for ordinary users to interpret, software implementations that allow practical use of the neural network models are also provided. The obtained neural network models give sufficiently reliable forecasts, which makes them an effective tool for analysing and predicting the dynamics of air pollution in an industrial city. This work thus advances the topical problem of forecasting the atmospheric air pollution index in industrial cities on the basis of neural network models.
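    A minimal sketch of the temporal model idea is given below: a small multilayer perceptron predicts the next value of the air pollution index from the last few daily values, and the forecast is assessed by the correlation coefficient with the reference data. The series is synthetic and scikit-learn is assumed; the paper's models were trained on measured pollutant concentrations.

    # Hypothetical temporal forecast of an air pollution index from lagged values.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    days = np.arange(1000)
    api = 50 + 15 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 5, days.size)

    lags = 5
    X = np.column_stack([api[i:len(api) - lags + i] for i in range(lags)])
    y = api[lags:]                         # value on the following day

    split = 800
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
    model.fit(X[:split], y[:split])

    pred = model.predict(X[split:])
    corr = np.corrcoef(pred, y[split:])[0, 1]
    print("correlation between forecast and reference data:", round(corr, 3))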

  8. Experiments in Neural-Network Control of a Free-Flying Space Robot

    Science.gov (United States)

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free-flying space robot prototype. The first issue concerns the importance of true system-level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient-based optimization method such as backpropagation; but many real-world systems have Discrete Valued Functions (DVFs) that do not permit gradient-based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue is substantially resolved in this research by drawing on the new contributions above.

  9. Electricity market price forecasting by grid computing optimizing artificial neural networks

    OpenAIRE

    Niimura, T.; Ozawa, K.; Sakamoto, N.

    2007-01-01

    This paper presents a grid computing approach to parallel-process a neural network time-series model for forecasting electricity market prices. A grid computing environment introduced in a university computing laboratory provides access to otherwise underused computing resources. The grid computing of the neural network model not only processes several times faster than a single iterative process, but also provides chances of improving forecasting accuracy. Results of numerical tests using re...

  10. Neural network for constrained nonsmooth optimization using Tikhonov regularization.

    Science.gov (United States)

    Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun

    2015-03-01

    This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. Firstly, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problem. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality-feasible region in finite time, and is globally convergent to the unique optimal solution of the related strongly convex optimization problem. Compared with existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. In the end, some numerical examples and an application are given to illustrate the effectiveness and improvement of the proposed neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. 23rd Workshop of the Italian Neural Networks Society (SIREN)

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2014-01-01

    This volume collects a selection of contributions which were presented at the 23rd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare, Salerno, Italy during May 23-24, 2013. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, like the workshop, is organized in two main components, a special session and a group of regular sessions featuring different aspects of and points of view on artificial neural networks, artificial and natural intelligence, as well as psychological and cognitive theories for modeling human behaviors and human-machine interactions, including Information Communication applications of compelling interest.

  12. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron (LiRA) and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed during the preliminary learning process of a neural network classifier. In experiments using the MNIST database of handwritten digits, the feature selection procedure allows the number of features to be reduced (from 60 000 to 7000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network is carried out, which shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.
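    The sketch below gives a rough analogue of the weight-based feature selection idea: a one-hidden-layer perceptron is trained, each input feature is scored by the magnitude of its outgoing connection weights, and only the strongest features are kept. The digits dataset and all thresholds are assumptions standing in for MNIST and the LiRA setup.

    # Rough analogue of connection-weight-based feature selection.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    net.fit(Xtr, ytr)

    # coefs_[0] has shape (n_features, n_hidden): sum absolute weights per input.
    score = np.abs(net.coefs_[0]).sum(axis=1)
    keep = np.argsort(score)[-32:]            # keep the 32 strongest features

    pruned = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    pruned.fit(Xtr[:, keep], ytr)
    print("full:", net.score(Xte, yte), "pruned:", pruned.score(Xte[:, keep], yte))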

  13. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  14. Neural Networks for Synthesis and Optimization of Antenna Arrays

    Directory of Open Access Journals (Sweden)

    S. A. Djennas

    2007-04-01

    Full Text Available This paper describes a typical application of back-propagation neural networks to the synthesis and optimization of antenna arrays. The neural network is able to model and to optimize antenna arrays by acting on radioelectric or geometric parameters and by taking into account predetermined general criteria. The neural network not only establishes the analytical relations needed for the optimization step, but also provides great flexibility between the input and output system parameters. The optimization step then becomes possible thanks to the explicit relation given by the neural network. According to different formulations of the synthesis problem, such as acting on the feed law (amplitude and/or phase) and/or the spatial position of the radiating sources, results on antenna array synthesis and optimization by neural networks are presented and discussed. Moreover, the ANN generates synthesis results very quickly compared with other approaches.

  15. Sea level forecasts using neural networks

    Science.gov (United States)

    Röske, Frank

    1997-03-01

    In this paper, a new method for predicting the sea level employing a neural network approach is introduced. It was designed to improve the prediction of the sea level along the German North Sea Coast under standard conditions. The sea level at any given time depends upon the tides as well as meteorological and oceanographic factors, such as the winds and external surges induced by air pressure. Since tidal predictions are already sufficiently accurate, they have been subtracted from the observed sea levels. The differences, called anomalies in this paper, are predicted up to 18 hours in advance. The hourly prediction of the sea level is distinguished from its prediction at the times of high and low tide. For this study, Cuxhaven was selected as a reference site. The predictions made using neural networks were compared for accuracy with the prognoses prepared using six models: two hydrodynamic models, a statistical model, a nearest neighbor model based on analogies, the persistence model, and the verbal forecasts that are broadcast and kept on record by the Sea Level Forecast Service of the Federal Maritime and Hydrography Agency (BSH) in Hamburg. Predictions were calculated for the year 1993 and compared with the actual levels measured. Artificial neural networks are capable of learning; by applying them to the prediction of sea levels, the aim is to learn from past events and to make the experience of expert forecasters objective. Instead of the widespread back-propagation networks, the self-organizing feature map of Kohonen, or “Kohonen network”, was applied. The fundamental principle of this network is the transformation of signal similarity into neighborhood of the neurons while preserving the topology of the signal space. The self-organization procedure of Kohonen networks can be visualized. To make predictions, these networks have been subdivided into a part describing the

  16. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
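    The sketch below shows the simplest of the learning scenarios mentioned above: an online perceptron learning a teacher's binary rule, with the generalisation error estimated on held-out inputs as training proceeds. The dimensions and schedule are illustrative, and the paper's thermodynamic bookkeeping is not reproduced.

    # Online perceptron learning a teacher rule; track generalisation error.
    import numpy as np

    rng = np.random.default_rng(4)
    d = 100
    teacher = rng.normal(size=d)
    teacher /= np.linalg.norm(teacher)

    w = np.zeros(d)
    Xtest = rng.normal(size=(5000, d))
    ytest = np.sign(Xtest @ teacher)

    for t in range(2000):
        x = rng.normal(size=d)
        y = np.sign(x @ teacher)              # label provided by the teacher
        if np.sign(x @ w) != y:               # classic perceptron update
            w += y * x
        if (t + 1) % 500 == 0:
            err = np.mean(np.sign(Xtest @ w) != ytest)
            print(f"examples seen: {t + 1:5d}  generalisation error: {err:.3f}")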

  17. Neural bases of recommendations differ according to social network structure.

    Science.gov (United States)

    O'Donnell, Matthew Brook; Bayer, Joseph B; Cascio, Christopher N; Falk, Emily B

    2017-01-01

    Ideas spread across social networks, but not everyone is equally positioned to be a successful recommender. Do individuals with more opportunities to connect otherwise unconnected others (high information brokers) use their brains differently than low information brokers when making recommendations? We test the hypothesis that those with more opportunities for information brokerage may make greater use of brain systems implicated in considering the thoughts, perspectives, and mental states of others (i.e. 'mentalizing') when spreading ideas. We used social network analysis to quantify individuals' opportunities for information brokerage. This served as a predictor of activity within meta-analytically defined neural regions associated with mentalizing (dorsomedial prefrontal cortex, temporal parietal junction, medial prefrontal cortex/posterior cingulate cortex, middle temporal gyrus) as participants received feedback about peer opinions of mobile game apps. Higher information brokers exhibited more activity in this mentalizing network when receiving divergent peer feedback and updating their recommendation. These data support the idea that those in different network positions may use their brains differently to perform social tasks. Different social network positions might provide more opportunities to engage specific psychological processes, or those who tend to engage such processes more may place themselves in systematically different network positions. These data highlight the value of integrating levels of analysis, from brain networks to social networks. © The Author (2017). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  18. Piecewise convexity of artificial neural networks.

    Science.gov (United States)

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A neural network for noise correlation classification

    Science.gov (United States)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time-series, we efficiently reduce the dimensionality of noise correlation data while keeping the relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment in which four students manually classified the data revealed that the classification error they would assign to each other is substantially larger than the classification error of the ANN (>35 per cent). This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.
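    A hedged sketch of the two-stage idea follows: each correlation trace is compressed with a wavelet transform and then classified by a small neural network trained on a modest labelled subset. PyWavelets (pywt), scikit-learn and the synthetic traces are assumptions; the paper's feature extraction and labels come from real ambient-noise correlations.

    # Wavelet-based dimensionality reduction followed by an MLP classifier.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)

    def make_trace(good):
        # Synthetic "correlation": a coherent arrival plus noise, or noise only.
        t = np.linspace(-1, 1, 512)
        sig = np.exp(-((np.abs(t) - 0.3) ** 2) / 0.01) if good else 0.0
        return sig + rng.normal(0, 0.5 if good else 1.0, t.size)

    labels = (np.arange(400) % 2 == 0).astype(int)
    traces = [make_trace(bool(lab)) for lab in labels]

    def features(trace):
        # Keep only coarse wavelet coefficients: a crude dimensionality reduction.
        coeffs = pywt.wavedec(trace, 'db4', level=4)
        return np.concatenate(coeffs[:2])

    X = np.array([features(tr) for tr in traces])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(X[:100], labels[:100])            # small manually-labelled subset
    print("accuracy on the rest:", clf.score(X[100:], labels[100:]))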

  20. Antagonistic neural networks underlying differentiated leadership roles.

    Science.gov (United States)

    Boyatzis, Richard E; Rochford, Kylie; Jack, Anthony I

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks - the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  1. Antagonistic Neural Networks Underlying Differentiated Leadership Roles

    Directory of Open Access Journals (Sweden)

    Richard Eleftherios Boyatzis

    2014-03-01

    Full Text Available The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task oriented and socio-emotional oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN) and the Default Mode Network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  2. Antagonistic neural networks underlying differentiated leadership roles

    Science.gov (United States)

    Boyatzis, Richard E.; Rochford, Kylie; Jack, Anthony I.

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks – the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  3. Shaping embodied neural networks for adaptive goal-directed behavior.

    Directory of Open Access Journals (Sweden)

    Zenas C Chao

    2008-03-01

    Full Text Available The acts of learning and memory are thought to emerge from the modifications of synaptic connections between neurons, as guided by sensory feedback during behavior. However, much is unknown about how such synaptic processes can sculpt and are sculpted by neuronal population dynamics and an interaction with the environment. Here, we embodied a simulated network, inspired by dissociated cortical neuronal cultures, with an artificial animal (an animat) through a sensory-motor loop consisting of structured stimuli, detailed activity metrics incorporating spatial information, and an adaptive training algorithm that takes advantage of spike timing dependent plasticity. By using our design, we demonstrated that the network was capable of learning associations between multiple sensory inputs and motor outputs, and the animat was able to adapt to a new sensory mapping to restore its goal behavior: move toward and stay within a user-defined area. We further showed that successful learning required proper selections of stimuli to encode sensory inputs and a variety of training stimuli with adaptive selection contingent on the animat's behavior. We also found that an individual network had the flexibility to achieve different multi-task goals, and the same goal behavior could be exhibited with different sets of network synaptic strengths. While lacking the characteristic layered structure of in vivo cortical tissue, the biologically inspired simulated networks could tune their activity in behaviorally relevant manners, demonstrating that leaky integrate-and-fire neural networks have an innate ability to process information. This closed-loop hybrid system is a useful tool to study the network properties intermediating synaptic plasticity and behavioral adaptation. The training algorithm provides a stepping stone towards designing future control systems, whether with artificial neural networks or biological animats themselves.

  4. Neural networks forecast in small catchments with transfer of network parameters

    Science.gov (United States)

    Maca, P.; Havlicek, V.; Hermanovsky, M.; Horacek, S.; Pech, P.

    2009-04-01

    This contribution deals with a neural network approach for short-term forecasting on small catchments. The applied methodology is based on the theory of the multilayer perceptron (MLP); a feed-forward neural network with back-propagation optimization was tested in order to explore the possibilities of transferring parameters between different catchments. Supervised optimization of the network parameters and structure was investigated, and a software tool was created for these research and operative purposes. The hourly discharges and rainfall data of real flood events served as input to the MLP. Seven catchments, with areas ranging from 10 to 250 square kilometres and situated in the eastern part of the Czech Republic, were selected. The input data were normalized by a parametric method. Variable configurations of the neural network were tested in a number of modes represented by different combinations of learning and testing data sets. The analysis focuses on the ability of the model to forecast flood events with different peak discharge magnitudes. This was assessed in both application steps: MLP learning and testing within a given catchment, and transfer of the parameters of a well-trained network to another catchment. The length of prediction ranged from one hour to six hours ahead. The results showed that the model is capable of providing satisfactory short-term discharge forecasts for most of the studied cases, including successful parameter transfer among different catchments. This was accomplished by optimization of the parameters which determine not only the structure and behaviour of the applied network but also the transformation of the input data.

  5. Neural synchrony in cortical networks: history, concept and current status

    Directory of Open Access Journals (Sweden)

    Peter Uhlhaas

    2009-07-01

    Full Text Available Following the discovery of context-dependent synchronization of oscillatory neuronal responses in the visual system, the role of neural synchrony in cortical networks has been expanded to provide a general mechanism for the coordination of distributed neural activity patterns. In the current paper, we present an update of the status of this hypothesis by summarizing recent results from our laboratory that suggest important new insights regarding the mechanisms, function and relevance of this phenomenon. In the first part, we present recent results derived from animal experiments and mathematical simulations that provide novel explanations and mechanisms for zero and near-zero phase lag synchronization. In the second part, we discuss the role of neural synchrony in expectancy during perceptual organization and its role in conscious experience. This is followed by evidence indicating that, in addition to supporting conscious cognition, neural synchrony is abnormal in major brain disorders, such as schizophrenia and autism spectrum disorders. We conclude this paper with suggestions for further research as well as with critical issues that need to be addressed in future studies.

  6. Application of neural network for real-time measurement of electrical resistivity in cold crucible

    Science.gov (United States)

    Votava, Pavel; Poznyak, Igor

    2017-08-01

    The article describes the use of an induction furnace with a cold crucible as a tool for real-time measurement of the electrical resistivity of a melted material. The measurement is based on the solution of an inverse problem for a 2D mathematical model, implementable in a microcontroller or an FPGA in the form of a neural network. The results of the 2D mathematical model were provided as a training set for the neural network. Finally, the implementation results are discussed together with the measurement uncertainty contributed by the neural network implementation itself.

  7. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed...... by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights over-fitting is avoided. When ensembles of specialized experts are combined the performance...

  8. Ship Benchmark Shaft and Engine Gain FDI Using Neural Network

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Izadi-Zamanabadi, Roozbeh

    2002-01-01

    This paper concerns fault detection and isolation based on neural network modeling. A neural network is trained to recognize the input-output behavior of a nonlinear plant, and faults are detected if the output estimated by the network differs from the measured plant output by more than a specified...... threshold value. In the paper a method for determining this threshold based on the neural network model is proposed, which can be used for a design strategy to handle residual sensitivity to input variations. The proposed method is used for successful FDI of a diesel engine gain fault in a ship propulsion...

  9. Forex Market Prediction Using NARX Neural Network with Bagging

    Directory of Open Access Journals (Sweden)

    Shahbazi Nima

    2016-01-01

    Full Text Available We propose a new method for predicting movements in the Forex market based on a NARX neural network with a time-shifting bagging technique and financial indicators, such as the relative strength index and stochastic indicators. Neural networks have prominent learning ability but they often exhibit bad and unpredictable performance for noisy data. Compared with static neural networks, our method significantly reduces the error rate of the response and improves the performance of the prediction. We tested three different types of architecture for predicting the response and determined the best network approach. We applied our method to predicting hourly foreign exchange rates and found remarkable predictability in comprehensive experiments with 2 different foreign exchange rates (GBPUSD and EURUSD).

  10. Recognition of Gestures using Artifical Neural Network

    Directory of Open Access Journals (Sweden)

    Marcel MORE

    2013-12-01

    Full Text Available Sensors for motion measurements are now becoming more widespread. Thanks to their parameters and affordability they are already used not only in the professional sector, but also in devices intended for daily use or entertainment. One of their applications is in the control of devices by gestures. Systems that can determine the type of gesture from measured motion have many uses. Some are, for example, in medical practice, but they are even more often used in devices such as cell phones, where they serve as a non-standard form of input. Today there are already several approaches for solving this problem, but building a sufficiently reliable system is still a challenging task. In our project we are developing a solution based on an artificial neural network. In contrast to other solutions, this one does not require building a model for each measuring system and thus it can be used in combination with various sensors with only minimal changes to its structure.

  11. Spatial Dynamics of Multilayer Cellular Neural Networks

    Science.gov (United States)

    Wu, Shi-Liang; Hsu, Cheng-Hsiung

    2018-02-01

    The purpose of this work is to study the spatial dynamics of one-dimensional multilayer cellular neural networks. We first establish the existence of rightward and leftward spreading speeds of the model. Then we show that the spreading speeds coincide with the minimum wave speeds of the traveling wave fronts in the right and left directions. Moreover, we obtain the asymptotic behavior of the traveling wave fronts when the wave speeds are positive and greater than the spreading speeds. According to the asymptotic behavior and using various kinds of comparison theorems, some front-like entire solutions are constructed by combining the rightward and leftward traveling wave fronts with different speeds and a spatially homogeneous solution of the model. Finally, various qualitative features of such entire solutions are investigated.

  12. Enhancing Hohlraum Design with Artificial Neural Networks

    Science.gov (United States)

    Peterson, J. L.; Berzak Hopkins, L. F.; Humbird, K. D.; Brandon, S. T.; Field, J. E.; Langer, S. H.; Nora, R. C.; Spears, B. K.

    2017-10-01

    A primary goal of hohlraum design is to efficiently convert available laser power and energy to capsule drive, compression and ultimately fusion neutron yield. However, a major challenge of this multi-dimensional optimization problem is the relative computational expense of hohlraum simulations. In this work, we explore overcoming this obstacle with the use of artificial neural networks built off ensembles of hohlraum simulations. These machine learning systems emulate the behavior of full simulations in a fraction of the time, thereby enabling the rapid exploration of design parameters. We will demonstrate this technology with a search for modifications to existing high-yield designs that can maximize neutron production within NIF's current laser power and energy constraints. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734401.

  13. Tropical Timber Identification using Backpropagation Neural Network

    Science.gov (United States)

    Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.

    2017-01-01

    Each type of wood has different characteristics. Properly identifying the type of wood is important, especially for industries that need to know the timber type precisely. However, identification requires expertise, and only a limited number of experts are available. In addition, manual identification, even by experts, is rather inefficient because it requires a lot of time and is prone to human error. To overcome these problems, a digital-image-based method for identifying the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
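    The sketch below strings together the stages described above with scikit-image GLCM texture features and a backpropagation MLP, using random images as stand-ins for microscope sections. The function names graycomatrix/graycoprops are the current scikit-image spellings (older releases use greycomatrix/greycoprops), and the normalisation shown is a simple max-scaling rather than the paper's decimal scaling.

    # GLCM texture features followed by a backpropagation MLP classifier.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(6)

    def glcm_features(img):
        glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ['contrast', 'homogeneity', 'energy', 'correlation']
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # Two fake "species" with different texture statistics (synthetic data).
    def fake_section(species):
        base = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        return base if species == 0 else (base // 2 + 64).astype(np.uint8)

    labels = np.repeat([0, 1], 50)
    images = [fake_section(s) for s in labels]

    X = np.array([glcm_features(im) for im in images])
    X = X / np.abs(X).max(axis=0)             # simple feature normalisation
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X[::2], labels[::2])
    print("accuracy:", clf.score(X[1::2], labels[1::2]))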

  14. Thrips (Thysanoptera) identification using artificial neural networks.

    Science.gov (United States)

    Fedor, P; Malenovský, I; Vanhara, J; Sierka, W; Havel, J

    2008-10-01

    We studied the use of a supervised artificial neural network (ANN) model for semi-automated identification of 18 common European species of Thysanoptera from four genera: Aeolothrips Haliday (Aeolothripidae), Chirothrips Haliday, Dendrothrips Uzel, and Limothrips Haliday (all Thripidae). As input data, we entered 17 continuous morphometric and two qualitative two-state characters measured or determined on different parts of the thrips body (head, pronotum, forewing and ovipositor) and the sex. Our experimental data set included 498 thrips specimens. A relatively simple ANN architecture (multilayer perceptrons with a single hidden layer) enabled a 97% correct simultaneous identification of both males and females of all the 18 species in an independent test. This high reliability of classification is promising for a wider application of ANN in the practice of Thysanoptera identification.

  15. Neural network training as a dissipative process.

    Science.gov (United States)

    Gori, Marco; Maggini, Marco; Rossi, Alessandro

    2016-09-01

    This paper analyzes the practical issues and reports some results on a theory in which learning is modeled as a continuous temporal process driven by laws describing the interactions of intelligent agents with their own environment. The classic regularization framework is paired with the idea of temporal manifolds by introducing the principle of least cognitive action, which is inspired by the related principle of mechanics. The introduction of the counterparts of the kinetic and potential energy leads to an interpretation of learning as a dissipative process. As an example, we apply the theory to supervised learning in neural networks and show that the corresponding Euler-Lagrange differential equations can be connected to the classic gradient descent algorithm on the supervised pairs. We give preliminary experiments to confirm the soundness of the theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Neural networks in support of manned space

    Science.gov (United States)

    Werbos, Paul J.

    1989-01-01

    Many lobbyists in Washington have argued that artificial intelligence (AI) is an alternative to manned space activity. In actuality, this is the opposite of the truth, especially as regards artificial neural networks (ANNs), the form of AI which has the greatest hope of mimicking human abilities in learning, interfacing with sensors and actuators, flexibility and balanced judgement. ANNs and their relation to expert systems (the more traditional form of AI), and the limitations of both technologies, are briefly reviewed. A few highlights of recent work on ANNs, including an NSF-sponsored workshop on ANNs for control applications, are given. Current thinking on ANNs for use in certain key areas (the National Aerospace Plane, teleoperation, the control of large structures, fault diagnostics, and docking) which may be crucial to the long-term future of man in space is discussed.

  17. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation.

    Science.gov (United States)

    Vuković, Najdan; Miljković, Zoran

    2013-10-01

    A radial basis function (RBF) neural network is constructed from a certain number of RBF neurons, and these networks are among the most widely used neural networks for modeling various nonlinear problems in engineering. A conventional RBF neuron is usually based on a Gaussian activation function with a single width per activation function. This feature restricts neuron performance when modeling complex nonlinear problems. To overcome the limitation of a single scale, this paper presents a neural network with a similar but different activation function, the hyper basis function (HBF). The HBF allows different scaling of the input dimensions to provide a better generalization property when dealing with complex nonlinear problems in engineering practice. The HBF is based on a generalization of the Gaussian neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and the prototype vector. Compared to the RBF, the HBF neuron has more parameters to optimize, but an HBF neural network needs fewer HBF neurons to memorize the relationship between the input and output sets in order to achieve a good generalization property. However, recent results on HBF neural network performance have shown that an optimal way of constructing this type of neural network is needed; this paper addresses this issue and modifies a sequential learning algorithm for the HBF neural network that exploits the concept of a neuron's significance and allows growing and pruning of HBF neurons during the learning process. An extensive experimental study shows that the HBF neural network, trained with the developed learning algorithm, achieves lower prediction error and a more compact network. Copyright © 2013 Elsevier Ltd. All rights reserved.
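    As a toy illustration of the difference between the two neuron types, the sketch below evaluates an ordinary single-width Gaussian RBF unit and a hyper basis function unit that uses a Mahalanobis-like distance with a per-neuron covariance. Shapes and values are illustrative assumptions.

    # Single-width RBF unit versus a Mahalanobis-distance HBF unit.
    import numpy as np

    def rbf(x, c, width):
        return np.exp(-np.sum((x - c) ** 2) / (2.0 * width ** 2))

    def hbf(x, c, cov_inv):
        d = x - c
        return np.exp(-0.5 * d @ cov_inv @ d)   # Mahalanobis-like distance

    c = np.array([0.0, 0.0])
    x = np.array([1.0, 0.1])

    # The HBF neuron can scale each input dimension differently.
    cov = np.diag([4.0, 0.04])                  # wide along x1, narrow along x2
    print("RBF response :", rbf(x, c, width=1.0))
    print("HBF response :", hbf(x, c, np.linalg.inv(cov)))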

  18. Financial time series prediction using spiking neural networks.

    Directory of Open Access Journals (Sweden)

    David Reid

    Full Text Available In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks, a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data; and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting and in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  19. [Robustness analysis of adaptive neural network model based on spike timing-dependent plasticity].

    Science.gov (United States)

    Chen, Yunzhi; Xu, Guizhi; Zhou, Qian; Guo, Miaomiao; Guo, Lei; Wan, Xiaowei

    2015-02-01

    To explore the self-organizing robustness of biological neural networks, and thus to provide new ideas and methods for electromagnetic bionic protection, we studied both the information transmission mechanism of neural networks and the spike timing-dependent plasticity (STDP) mechanism, and then investigated the relationship between synaptic plasticity and the adaptive characteristics of biological systems. A feedforward neural network combining the Izhikevich neuron model with the STDP mechanism was then constructed, and the adaptive robustness of the network was analyzed. Simulation results showed that the STDP-based neural network has good robustness, and that this characteristic is closely related to the STDP mechanism. Building on this simulation work, cell circuits with neurons and synaptic circuits that can emulate the information processing mechanisms of the biological nervous system will be built, and electronic circuits with adaptive robustness will then be designed on the basis of these cell circuits.
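    A minimal pair-based STDP rule is sketched below: the synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise, with exponential time windows. The constants are generic textbook values rather than those of the model in the paper.

    # Pair-based STDP weight update for individual pre/post spike pairs.
    import numpy as np

    A_plus, A_minus = 0.01, 0.012     # learning amplitudes
    tau_plus, tau_minus = 20.0, 20.0  # time constants (ms)

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:                                   # pre before post -> LTP
            return A_plus * np.exp(-dt / tau_plus)
        return -A_minus * np.exp(dt / tau_minus)      # post before pre -> LTD

    w = 0.5
    for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 72.0)]:
        w += stdp_dw(t_pre, t_post)
        print(f"pre at {t_pre} ms, post at {t_post} ms -> w = {w:.4f}")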

  20. Wavelet neural networks with applications in financial engineering, chaos, and classification

    CERN Document Server

    Alexandridis, Antonios K

    2014-01-01

    Through extensive examples and case studies, Wavelet Neural Networks provides a step-by-step introduction to modeling, training, and forecasting using wavelet networks. The acclaimed authors present a statistical model identification framework to successfully apply wavelet networks in various applications, specifically, providing the mathematical and statistical framework needed for model selection, variable selection, wavelet network construction, initialization, training, forecasting and prediction, confidence intervals, prediction intervals, and model adequacy testing. The text is ideal for

  1. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.
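    The sketch below shows the boosting half of the approach in serial form: several small BP networks are trained as weak classifiers on weighted resamples and combined by a weighted vote. The MapReduce parallelisation, the Hadoop cluster and the image-feature extraction of the paper are omitted, and the two-class toy data are an assumption.

    # Serial Adaboost over small BP networks (weak classifiers).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(7)
    X, y01 = make_classification(n_samples=1000, n_features=20, random_state=0)
    y = 2 * y01 - 1                                   # labels in {-1, +1}

    n_rounds, n = 15, len(y)
    D = np.full(n, 1.0 / n)                           # Adaboost sample weights
    learners, alphas = [], []

    for m in range(n_rounds):
        idx = rng.choice(n, size=n, replace=True, p=D)    # weighted resample
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=m)
        net.fit(X[idx], y[idx])
        h = net.predict(X)
        err = np.clip(D[h != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        D *= np.exp(-alpha * y * h)                       # re-weight samples
        D /= D.sum()
        learners.append(net)
        alphas.append(alpha)

    strong = np.sign(sum(a * net.predict(X) for a, net in zip(alphas, learners)))
    print("training accuracy of the boosted ensemble:", np.mean(strong == y))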

  2. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined reside on a low-dimensional manifold, and in principle approximating the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation of the function than approximation with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
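    The projection idea can be sketched as follows: data that live near a low-dimensional manifold embedded in a high-dimensional space are first projected with PCA fitted on a small, uniformly drawn sample, and the network is then trained in the projected space. Dimensions, the target function and the use of PCA are assumptions for illustration.

    # Project high-dimensional data to a low-dimensional space, then approximate.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(8)
    n, d_low, d_high = 5000, 3, 50

    Z = rng.uniform(-1, 1, (n, d_low))                 # latent coordinates
    A = rng.normal(size=(d_low, d_high))
    X = Z @ A + rng.normal(0, 0.01, (n, d_high))       # embedded in 50-D
    y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]            # function on the manifold

    # Fit the projection on a small, uniformly drawn sample of the data.
    sample = rng.choice(n, size=200, replace=False)
    proj = PCA(n_components=d_low).fit(X[sample])

    Xp = proj.transform(X)
    net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    net.fit(Xp[:4000], y[:4000])
    print("R^2 in the projected space:", net.score(Xp[4000:], y[4000:]))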

  3. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints.

    Science.gov (United States)

    Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui

    2017-10-01

    The pseudoconvex optimization problem, an important class of nonconvex optimization problems, plays an important role in scientific and engineering applications. In this paper, a recurrent one-layer neural network is proposed for solving the pseudoconvex optimization problem with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network is convergent to an optimal solution of the related problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not need penalty parameters and has better convergence properties. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, and we make some detailed comparisons with related known results. In the end, some numerical examples are provided to illustrate the effectiveness of the performance of the proposed neural network.

  4. Modeling Slump of Ready Mix Concrete Using Genetically Evolved Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Vinay Chandwani

    2014-01-01

    Full Text Available Artificial neural networks (ANNs) have been the preferred choice for modeling the complex and nonlinear material behavior where conventional mathematical approaches do not yield the desired accuracy and predictability. Despite their popularity as a universal function approximator and their wide range of applications, no specific rules for deciding the architecture of neural networks catering to a specific modeling task have been formulated. This research paper presents a methodology for automated design of the neural network architecture, replacing the conventional trial-and-error technique of finding the optimal neural network. The genetic algorithm (GA) stochastic search has been harnessed for evolving the optimum number of hidden layer neurons, the transfer function, the learning rate, and the momentum coefficient for a backpropagation ANN. The methodology has been applied for modeling the slump of ready mix concrete based on its design mix constituents, namely, cement, fly ash, sand, coarse aggregates, admixture, and water-binder ratio. Six different statistical performance measures have been used for evaluating the performance of the trained neural networks. The study showed that, in comparison to the conventional trial-and-error technique of deciding the neural network architecture and training parameters, the neural network architecture evolved through GA was of reduced complexity and provided better prediction performance.
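    A compact sketch of the genetic search is given below: a small population of genomes encoding the number of hidden neurons, the transfer function, the learning rate and the momentum coefficient is evolved by selection, uniform crossover and mutation, with cross-validated R^2 as the fitness. The slump data here are synthetic stand-ins for the ready-mix-concrete mixes and the GA settings are illustrative assumptions.

    # Genetic search over a few MLP hyperparameters, replacing trial and error.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)
    X = rng.uniform(0, 1, (300, 6))   # cement, fly ash, sand, aggregate, admixture, w/b (scaled)
    y = 50 * X[:, 5] + 10 * X[:, 0] - 15 * X[:, 1] + rng.normal(0, 2, 300)  # synthetic "slump"

    def random_genome():
        return dict(hidden=int(rng.integers(4, 33)),
                    act=str(rng.choice(['relu', 'tanh', 'logistic'])),
                    lr=float(10 ** rng.uniform(-4, -1)),
                    mom=float(rng.uniform(0.1, 0.95)))

    def fitness(g):
        net = MLPRegressor(hidden_layer_sizes=(g['hidden'],), activation=g['act'],
                           solver='sgd', learning_rate_init=g['lr'],
                           momentum=g['mom'], max_iter=600, random_state=0)
        return cross_val_score(net, X, y, cv=3, scoring='r2').mean()

    pop = [random_genome() for _ in range(10)]
    for generation in range(5):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:4]                                   # selection
        children = []
        for _ in range(6):
            i, j = rng.choice(len(parents), 2, replace=False)
            child = {k: (parents[i][k] if rng.random() < 0.5 else parents[j][k])
                     for k in parents[i]}                      # uniform crossover
            if rng.random() < 0.3:                             # mutation
                child['hidden'] = int(rng.integers(4, 33))
            children.append(child)
        pop = parents + children

    best = max(pop, key=fitness)
    print("best genome:", best)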

  5. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520
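    The serial core of the approach can be sketched as below (the MapReduce/Hadoop parallelization and the image-feature extraction are omitted): an AdaBoost-style (SAMME) ensemble of 15 small backpropagation networks acting as weak classifiers. A generic scikit-learn data set stands in for the image features.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n, n_classes = len(y_tr), len(np.unique(y))

rng = np.random.default_rng(0)
w = np.full(n, 1.0 / n)                           # boosting weights over training samples
learners, alphas = [], []

for t in range(15):                               # 15 weak BP networks, as in the paper
    idx = rng.choice(n, size=n, replace=True, p=w)               # weighted resampling
    weak = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=t)
    weak.fit(X_tr[idx], y_tr[idx])
    miss = weak.predict(X_tr) != y_tr
    err = np.clip(np.sum(w * miss), 1e-10, 1 - 1e-10)
    alpha = np.log((1 - err) / err) + np.log(n_classes - 1)      # SAMME learner weight
    w *= np.exp(alpha * miss)
    w /= w.sum()
    learners.append(weak)
    alphas.append(alpha)

# Strong classifier: weighted vote of the weak networks.
votes = np.zeros((len(X_te), n_classes))
for weak, alpha in zip(learners, alphas):
    votes[np.arange(len(X_te)), weak.predict(X_te)] += alpha
print("ensemble accuracy:", np.mean(votes.argmax(axis=1) == y_te))
```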

  6. Background rejection in NEXT using deep neural networks

    CERN Document Server

    Renner, J.

    2017-01-01

    We investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study performed better, by a factor of 1.2 to 1.6, than previous methods developed using the same topological signatures, and there is potential for further improvement.
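    A generic sketch of the classification step (not the NEXT collaboration's network or data): a small convolutional classifier separates "signal" from "background" images, and a cut on the resulting signal probability sets the background-rejection versus efficiency trade-off. The random 32x32 "track images" are pure stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 512
images = torch.rand(n, 1, 32, 32)               # fake single-channel "track images"
labels = torch.randint(0, 2, (n,))              # 1 = signal, 0 = background
images[labels == 1, :, 12:20, 12:20] += 0.5     # give signal events a brighter blob

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                         # full-batch training, for brevity
    opt.zero_grad()
    loss = loss_fn(net(images), labels)
    loss.backward()
    opt.step()

# Cutting on the signal probability trades efficiency against background rejection.
with torch.no_grad():
    probs = torch.softmax(net(images), dim=1)[:, 1]
eff = (probs[labels == 1] > 0.5).float().mean().item()
rej = (probs[labels == 0] <= 0.5).float().mean().item()
print("signal efficiency %.2f, background rejection %.2f at p > 0.5" % (eff, rej))
```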

  7. Signal Processing in Periodically Forced Gradient Frequency Neural Networks.

    Science.gov (United States)

    Kim, Ji Chul; Large, Edward W

    2015-01-01

    Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing.
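    A numerical sketch of periodic forcing of a gradient-frequency bank, using a truncated Hopf-type version of the canonical oscillator model; the parameter values are illustrative and not taken from the paper. Oscillators tuned near the forcing frequency respond most strongly.

```python
import numpy as np
from scipy.integrate import solve_ivp

natural = 2 * np.pi * np.linspace(1.0, 4.0, 32)   # natural frequencies of the bank (rad/s)
alpha, beta = -0.5, -10.0                         # linear damping and amplitude saturation
w0, F = 2 * np.pi * 2.0, 0.2                      # forcing frequency and amplitude

def rhs(t, state):
    z = state[:32] + 1j * state[32:]              # complex oscillator states, split into real/imag
    dz = z * (alpha + 1j * natural + beta * np.abs(z) ** 2) + F * np.exp(1j * w0 * t)
    return np.concatenate([dz.real, dz.imag])

sol = solve_ivp(rhs, (0.0, 30.0), np.zeros(64), max_step=0.01)
amps = np.abs(sol.y[:32, -1] + 1j * sol.y[32:, -1])
print("most responsive oscillator: %.2f Hz" % (natural[np.argmax(amps)] / (2 * np.pi)))
```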

  8. Decorrelated jet substructure tagging using adversarial neural networks

    Science.gov (United States)

    Shimmin, Chase; Sadowski, Peter; Baldi, Pierre; Weik, Edison; Whiteson, Daniel; Goul, Edward; Søgaard, Andreas

    2017-10-01

    We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.
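    The adversarial setup can be sketched roughly as follows (a toy re-creation, not the authors' code or data): a classifier is penalized whenever an adversary can infer the jet mass from the classifier output on background events, pushing the tagger toward mass decorrelation. The features, network sizes, and trade-off parameter lam are invented for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4000
mass = torch.rand(n, 1)                          # stand-in for (normalized) jet mass
labels = torch.randint(0, 2, (n, 1)).float()     # 1 = signal, 0 = background
# Toy features: discriminating, but correlated with mass for background events.
feats = torch.cat([labels + 0.3 * torch.randn(n, 1),
                   mass + 0.1 * torch.randn(n, 1)], dim=1)
bkg = labels.squeeze(1) == 0

clf = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # tagger
adv = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # mass adversary
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce, mse, lam = nn.BCEWithLogitsLoss(), nn.MSELoss(), 5.0

for step in range(2000):
    # 1) adversary learns to predict mass from the (frozen) tagger output on background
    opt_a.zero_grad()
    out_bkg = torch.sigmoid(clf(feats[bkg])).detach()
    loss_a = mse(adv(out_bkg), mass[bkg])
    loss_a.backward()
    opt_a.step()
    # 2) tagger: classify well, but make the adversary fail (decorrelation penalty)
    opt_c.zero_grad()
    logits = clf(feats)
    out_bkg = torch.sigmoid(logits[bkg])
    loss_c = bce(logits, labels) - lam * mse(adv(out_bkg), mass[bkg])
    loss_c.backward()
    opt_c.step()

with torch.no_grad():
    acc = ((torch.sigmoid(clf(feats)) > 0.5).float() == labels).float().mean()
print("tagging accuracy after adversarial training: %.2f" % acc.item())
```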

  9. Architecture Analysis of an FPGA-Based Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Angelo de Abreu de Sousa

    2014-01-01

    Full Text Available Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field in order to approach several practical requirements, including decreasing training and operation times in high performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular executions, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper addresses different aspects of the architectural characteristics of a Hopfield neural network implemented in an FPGA, such as maximum operating frequency and chip-area occupancy as a function of network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.
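    The multiplier-free idea can be illustrated in plain software (this is not the FPGA architecture itself): with bipolar states in {-1, +1}, each product w_ij * s_j reduces to adding or subtracting w_ij, which maps naturally onto FPGA adders. The stored patterns and network size below are arbitrary.

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = (patterns.T @ patterns).astype(float) / n    # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=10):
    s = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            # accumulate +w or -w depending on each neighbour's sign: no multiplications needed
            acc = sum(W[i, j] if s[j] > 0 else -W[i, j] for j in range(n))
            s[i] = 1 if acc >= 0 else -1
    return s

noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])    # first pattern with one bit flipped
print("recalled pattern:", recall(noisy))
```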

  10. ARTIFICIAL NEURAL NETWORK FOR MODELS OF HUMAN OPERATOR

    Directory of Open Access Journals (Sweden)

    Martin Ruzek

    2017-12-01

    Full Text Available This paper presents a new approach to modeling mental functions with the use of artificial neural networks. Artificial neural networks seem to be a promising method for modeling a human operator because the architecture of the ANN is directly inspired by the biological neuron. On the other hand, the classical paradigms of artificial neural networks are not suitable because they oversimplify the real processes in biological neural networks. The search for a compromise between the complexity of biological neural networks and the practical feasibility of the artificial network led to a new learning algorithm. This algorithm is based on the classical multilayered neural network; however, the learning rule is different. The neurons update their parameters in a way that is similar to real biological processes. The basic idea is that the neurons compete for resources, and the criterion deciding which neuron will survive is the usefulness of the neuron to the whole neural network. The neuron does not use a "teacher" or any kind of supervisory system; it receives only the information that is present in the biological system. The learning process can be seen as a search for an equilibrium point, a state of maximal importance of the neuron for the neural network. This position can change if the environment changes. The name of this type of learning, the homeostatic artificial neural network, originates from this idea, as it is similar to the process of homeostasis known in any living cell. The simulation results suggest that this type of learning can also be useful in other artificial learning and recognition tasks.

  11. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology which classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman recurrent neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. The accuracy of the LRN classifier turns out to be the highest among all the artificial neural networks considered. The overall jackknife cross-validation accuracy is 93.2% for the data set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (type I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that this protein sequence representation reflects the core features of membrane proteins better than the classical AA composition.
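    A generic sketch of the pipeline described (fused sequence features, PCA, then a classifier); the protein feature sets are replaced with a synthetic stand-in, and a scikit-learn MLP stands in for the PNN/GRNN/LRN classifiers compared in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for the fused feature vector (AA composition, dipeptides, hydrophobicity, ...)
X, y = make_classification(n_samples=600, n_features=420, n_informative=30, random_state=0)

pipeline = make_pipeline(
    PCA(n_components=30),                              # dimensionality reduction of the fused features
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("cross-validated accuracy: %.3f" % scores.mean())
```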

  12. Toward Petascale Biologically Plausible Neural Networks

    Science.gov (United States)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive-scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
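    As a reminder of the per-neuron model referred to in the talk, the snippet below integrates a single Hodgkin-Huxley neuron with forward Euler using textbook parameter values; the MPI-parallel, event-driven network machinery is of course not reproduced here.

```python
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3           # membrane capacitance and maximal conductances
ENa, EK, EL = 50.0, -77.0, -54.387               # reversal potentials (mV)
dt, T, I_ext = 0.01, 100.0, 10.0                 # time step (ms), duration (ms), input current (uA/cm^2)

V, m, h, n = -65.0, 0.05, 0.6, 0.32              # initial membrane potential and gating variables
spikes, above = 0, False
for step in range(int(T / dt)):
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
    n += dt * (an * (1.0 - n) - bn * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I_ext - I_ion) / C
    if V > 0.0 and not above:                    # count upward zero crossings as spikes
        spikes += 1
    above = V > 0.0
print("spikes in %.0f ms: %d" % (T, spikes))
```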

  13. Enhancing optical communication with deep neural networks

    Science.gov (United States)

    Lohani, Sanjaya; Knutson, Erin; Tkach, Sam; Huver, Sean; Glasser, Ryan; Tulane University Collaboration; Deep Science AI Collaboration

    The spatial profile of optical modes may be altered such that they contain nonzero orbital angular momentum (OAM). Laguerre-Gauss (LG) states of light have a helical wavefront and well-defined OAM, and have recently been shown to allow for larger information transfer rates in optical communications as compared to using only Gaussian modes. A primary difficulty, however, is the accurate classification of different OAM optical states, which contain different values of OAM, in the detection stage. The difficulty of this differentiation increases as larger degrees of OAM are used. Here we show that deep neural networks can simultaneously classify numerically generated, noisy Laguerre-Gauss states with OAM values up to 100 with near-100% accuracy. This method relies only on the intensity profile of the detected OAM states, avoiding bulky and difficult-to-implement methods that are required to measure the phase profile of the modes in the receiver of the communication platform. This allows for a simplification of the network design and an increase in performance when using states with large degrees of OAM. We anticipate that this approach will allow for significant advances in the development of optical communication technologies. We acknowledge funding from the Louisiana State Board of Regents and Northrop Grumman - NG NEXT.
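    A toy version of the task (not the authors' network, and with far smaller OAM values): noisy intensity images of p = 0 Laguerre-Gauss modes are generated for a handful of OAM numbers and classified from intensity alone with a small neural network.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
size, w = 32, 0.35
xs = np.linspace(-1, 1, size)
r2 = (xs[:, None] ** 2 + xs[None, :] ** 2) / w ** 2       # squared radius in beam-waist units

def lg_intensity(l):
    img = (2.0 * r2) ** abs(l) * np.exp(-2.0 * r2)        # p = 0 Laguerre-Gauss intensity ring
    img /= img.max()
    return img + 0.1 * rng.standard_normal((size, size))  # detector noise

oam_values = list(range(1, 9))
X = np.array([lg_intensity(l).ravel() for l in oam_values for _ in range(200)])
y = np.repeat(oam_values, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("OAM classification accuracy:", clf.score(X_te, y_te))
```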

  14. Forecasting Water Levels Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Shreenivas N. Londhe

    2011-06-01

    Full Text Available For all ocean-related activities it is necessary to predict actual water levels as accurately as possible. The present work aims at predicting water levels with a lead time of a few hours to a day using the technique of artificial neural networks. Instead of using the previous and current values of the observed water level time series directly as input and output, the water level anomaly (the difference between the observed water level and the harmonically predicted tidal level) is calculated for each hour and the ANN model is developed using this time series. The network-predicted anomaly is then added to the harmonic tidal level to predict the water level. The exercise is carried out at six locations along the USA coastline: two in the Gulf of Mexico, two in the Gulf of Maine, and two in the Gulf of Alaska. The ANN models performed reasonably well for all forecasting intervals at all the locations. The ANN models were also run in real-time mode for a period of eight months. Given the hurricane season in the Gulf of Mexico, the models were additionally tested during hurricanes in particular.
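    A minimal sketch of the anomaly approach on synthetic data (not the tide-gauge records used in the paper): the network learns the water-level anomaly from its recent history, and the harmonic tide is added back to form the forecast. Constituent periods, lags, and lead time are illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(5000)                                            # hourly samples
tide = 0.8 * np.sin(2 * np.pi * t / 12.42) + 0.3 * np.sin(2 * np.pi * t / 24.0)
anomaly = np.zeros_like(tide)
for i in range(1, len(t)):                                     # slowly varying surge component
    anomaly[i] = 0.98 * anomaly[i - 1] + 0.02 * rng.standard_normal()
observed = tide + anomaly

lags, lead = 24, 6                                             # use last 24 h of anomaly, forecast 6 h ahead
X = np.array([anomaly[i - lags:i] for i in range(lags, len(t) - lead)])
y = anomaly[lags + lead:]
split = 4000
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X[:split], y[:split])

# Forecast water level = predicted anomaly + harmonically predicted tide.
forecast = model.predict(X[split:]) + tide[lags + lead:][split:]
truth = observed[lags + lead:][split:]
print("RMSE of forecast water level: %.3f m" % np.sqrt(np.mean((forecast - truth) ** 2)))
```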

  15. Reject mechanisms for massively parallel neural network character recognition systems

    Science.gov (United States)

    Garris, Michael D.; Wilson, Charles L.

    1992-12-01

    Two reject mechanisms are compared using a massively parallel character recognition system implemented at NIST. The recognition system was designed to study the feasibility of automatically recognizing hand-printed text in a loosely constrained environment. The first method is a simple scalar threshold on the output activation of the winning neurode from the character classifier network. The second method uses an additional neural network trained on all outputs from the character classifier network to accept or reject assigned classifications. The neural network rejection method was expected to perform with greater accuracy than the scalar threshold method, but this was not supported by the test results presented. The scalar threshold method, even though arbitrary, is shown to be a viable reject mechanism for use with neural network character classifiers. Upon studying the performance of the neural network rejection method, analyses show that the two neural networks, the character classifier network and the rejection network, perform very similarly. This can be explained by the strong non-linear function of the character classifier network which effectively removes most of the correlation between character accuracy and all activations other than the winning activation. This suggests that any effective rejection network must receive information from the system which has not been filtered through the non-linear classifier.
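    The scalar-threshold reject rule is easy to sketch with off-the-shelf tools (a stand-in, not the NIST massively parallel system): classifications whose winning activation falls below a threshold are rejected rather than accepted, trading reject rate against accuracy on the accepted set.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=400, random_state=0).fit(X_tr, y_tr)

probs = clf.predict_proba(X_te)
winner, confidence = probs.argmax(axis=1), probs.max(axis=1)   # winning class and its activation
for threshold in (0.5, 0.9, 0.99):
    accepted = confidence >= threshold
    acc = np.mean(winner[accepted] == y_te[accepted])
    print("threshold %.2f: reject rate %.1f%%, accuracy on accepted %.1f%%"
          % (threshold, 100 * np.mean(~accepted), 100 * acc))
```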

  16. A renaissance of neural networks in drug discovery.

    Science.gov (United States)

    Baskin, Igor I; Winkler, David; Tetko, Igor V

    2016-08-01

    Neural networks are becoming a very popular method for solving machine learning and artificial intelligence problems. The variety of neural network types and their application to drug discovery requires expert knowledge to choose the most appropriate approach. In this review, the authors discuss traditional and newly emerging neural network approaches to drug discovery. Their focus is on backpropagation neural networks and their variants, self-organizing maps and associated methods, and a relatively new technique, deep learning. The most important technical issues are discussed, including overfitting and its prevention through regularization, ensemble and multitask modeling, model interpretation, and estimation of the applicability domain. Different aspects of using neural networks in drug discovery are considered: building structure-activity models with respect to various targets; predicting drug selectivity, toxicity profiles, ADMET and physicochemical properties; modeling characteristics of drug-delivery systems; and virtual screening. Neural networks continue to grow in importance for drug discovery. Recent developments in deep learning suggest that further improvements may be gained in the analysis of large chemical data sets. It is anticipated that neural networks will be more widely used in drug discovery in the future, and applied in non-traditional areas such as drug-delivery systems, biologically compatible materials, and regenerative medicine.

  17. Hybrid neural network bushing model for vehicle dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Jeong Hyun [Pukyong National University, Busan (Korea, Republic of); Lee, Seung Kyu [Hyosung Corporation, Changwon (Korea, Republic of); Yoo, Wan Suk [Pusan National University, Busan (Korea, Republic of)

    2008-12-15

    Although linear models have been widely used for bushings in vehicle suspension systems, they cannot express the nonlinear characteristics of a bushing with respect to amplitude and frequency. An artificial neural network model was suggested to capture the hysteretic responses of bushings. Such a model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid bushing model combining a linear model and a neural network is suggested. The linear model represents the linear stiffness and damping effects, and the artificial neural network is adopted to take the hysteretic responses into account. A rubber test was performed to capture the bushing characteristics, in which sine excitations with different frequencies and amplitudes were applied. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model is more robust than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing model, and the hybrid model results were almost identical to those of the linear model under several maneuvers.
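    A conceptual sketch of a hybrid bushing model on toy data (not the rubber-test measurements from the paper): a linear stiffness/damping term is identified by least squares, and a small network learns the remaining hysteretic residual force. The force law generating the "measurements" is invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.linspace(0, 10, 4000)
x = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 5.0 * t)    # displacement
v = np.gradient(x, t)                                                   # velocity
force = 1200.0 * x + 8.0 * v + 300.0 * np.tanh(3.0 * x) * np.sign(v)    # toy "measured" force

# Linear part: least-squares fit of stiffness and damping.
A = np.column_stack([x, v])
k, c = np.linalg.lstsq(A, force, rcond=None)[0]
residual = force - (k * x + c * v)

# Neural network part: learn the hysteretic residual from (x, v).
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(A, residual)

hybrid = k * x + c * v + net.predict(A)
print("linear-only RMSE : %.1f N" % np.sqrt(np.mean((k * x + c * v - force) ** 2)))
print("hybrid model RMSE: %.1f N" % np.sqrt(np.mean((hybrid - force) ** 2)))
```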

  18. A neural network applied to estimate Burr XII distribution parameters

    Energy Technology Data Exchange (ETDEWEB)

    Abbasi, B., E-mail: b.abbasi@gmail.co [Department of Industrial Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Hosseinifard, S.Z. [Department of Statistics and Operations Research, RMIT University, Melbourne (Australia); Coit, D.W. [Department of Industrial and System Engineering, Rutgers University, Piscataway, NJ (United States)

    2010-06-15

    The Burr XII distribution can closely approximate many other well-known probability density functions such as the normal, gamma, lognormal, and exponential distributions as well as the Pearson type I, II, V, VII, IX, X, XII families of distributions. Considering the wide range of shape and scale parameters of the Burr XII distribution, it can play an important role in reliability modeling, risk analysis, and process capability estimation. However, estimating the parameters of the Burr XII distribution can be a complicated task, and the use of conventional methods such as maximum likelihood estimation (MLE) and the method of moments (MM) is not straightforward. Some tables for estimating Burr XII parameters were provided by Burr (1942), but they are not adequate for many purposes or data sets. The Burr tables contain specific values of skewness and kurtosis and their corresponding Burr XII parameters; using interpolation or extrapolation to estimate other values may yield inaccurate estimates. In this paper, we present a neural network that takes values of skewness and kurtosis as inputs and estimates the corresponding Burr XII parameters. A trained network is presented, and one can use it to estimate Burr XII distribution parameters without prior knowledge of neural networks. The accuracy of the estimated Burr parameters is examined through simulation studies.
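    The idea can be sketched with Monte Carlo training data (a hedged stand-in, not the trained network published by the authors): samples of the Burr XII distribution provide (skewness, kurtosis) to (c, k) training pairs for a small regression network. Parameter ranges and sample sizes are arbitrary.

```python
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPRegressor

np.random.seed(0)
features, params = [], []
for _ in range(2000):
    c = np.random.uniform(3.0, 8.0)               # Burr XII shape parameters
    k = np.random.uniform(2.0, 6.0)
    sample = stats.burr12.rvs(c, k, size=2000)
    features.append([stats.skew(sample), stats.kurtosis(sample)])
    params.append([c, k])
features, params = np.array(features), np.array(params)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(features[:1800], params[:1800])           # (skewness, kurtosis) -> (c, k)

err = np.abs(net.predict(features[1800:]) - params[1800:]).mean(axis=0)
print("mean absolute error on held-out c and k: %.2f, %.2f" % tuple(err))
```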

  19. Stabilizing patterns in time: Neural network approach.

    Science.gov (United States)

    Ben-Shushan, Nadav; Tsodyks, Misha

    2017-12-01

    Recurrent and feedback networks are capable of holding dynamic memories. Nonetheless, training a network for that task is challenging, because one must cope with nonlinear propagation of errors in the system. Small deviations from the desired dynamics due to error or inherent noise might have a dramatic effect in the future. A method to cope with these difficulties is thus needed. In this work we focus on recurrent networks with linear activation functions and a binary output unit. We characterize their ability to reproduce a temporal sequence of actions over the output unit. We suggest casting the temporal learning problem as a perceptron problem. In the discrete case a finite margin appears, providing the network with some robustness to noise, within which it performs perfectly (i.e., it reproduces the desired sequence flawlessly for an arbitrary number of cycles). In the continuous case the margin approaches zero when the output unit changes its state, hence the network is only able to reproduce the sequence with slight jitters. Numerical simulations suggest that in the discrete-time case, the longest sequence that can be learned scales, at best, as the square root of the network size. A dramatic effect occurs when learning several short sequences in parallel: their total length can substantially exceed the length of the longest single sequence the network can learn. This model easily generalizes to an arbitrary number of output units, which boosts its performance. The effect is demonstrated with two practical examples of sequence learning. This work suggests a way to overcome stability problems when training recurrent networks and further quantifies the performance of a network under this specific learning scheme.
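    A small numerical sketch of the casting-to-a-perceptron idea described in the abstract: a linear recurrent network is teacher-forced with the desired binary sequence, a perceptron finds readout weights on the visited states, and the autonomous (output-fed-back) run is checked against the target. Network size and sequence length are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, cycles = 100, 12, 3
A = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)   # stable linear recurrent weights
b = rng.standard_normal(N)                           # feedback weights from the output unit
target = rng.choice([-1, 1], size=L)                 # desired periodic binary output sequence

# Teacher forcing: drive the network with the desired outputs and record the states.
x, states, labels = np.zeros(N), [], []
for t in range(cycles * L):
    states.append(np.append(x, 1.0))                 # append a constant bias input
    labels.append(target[t % L])
    x = A @ x + b * target[t % L]
states, labels = np.array(states), np.array(labels)

# The readout is now an ordinary perceptron problem over the visited states.
w = np.zeros(N + 1)
for _ in range(1000):
    for s, y in zip(states, labels):
        if y * (w @ s) <= 0:
            w += y * s

# Autonomous run: feed the output back and check that the sequence is reproduced.
x, out = np.zeros(N), []
for t in range(cycles * L):
    o = 1 if w @ np.append(x, 1.0) > 0 else -1
    out.append(o)
    x = A @ x + b * o
print("sequence reproduced exactly:", bool(np.array_equal(np.array(out), labels)))
```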

  20. Convergence behavior of delayed discrete cellular neural network without periodic coefficients.

    Science.gov (United States)

    Wang, Jinling; Jiang, Haijun; Hu, Cheng; Ma, Tianlong

    2014-05-01

    In this paper, we study the convergence behavior of delayed discrete cellular neural networks without periodic coefficients. By applying mathematical analysis techniques and properties of inequalities, sufficient conditions are derived to ensure that all solutions of a delayed discrete cellular neural network without periodic coefficients converge to a periodic function. Finally, some examples are given to show the effectiveness of the derived criteria. Copyright © 2014 Elsevier Ltd. All rights reserved.