WorldWideScience

Sample records for neural network architectures

  1. Improving neural network performance on SIMD architectures

    Science.gov (United States)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The use of SIMD extensions is a way to speed up neural network processing that is available on a number of modern CPUs. In our experiments, we use ARM NEON as an example SIMD architecture. The first method uses a half-float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of the activation functions. For each method we set up a series of experiments on convolutional and fully connected networks designed for image recognition tasks.
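
    As an illustration of the half-float and fixed-point ideas above, the following NumPy sketch emulates the three methods numerically. It is not the authors' NEON code (a real SIMD implementation would use ARM NEON intrinsics in C); the Q7.8 fixed-point format and layer sizes are assumptions made for the example.

    ```python
    # Sketch only: NumPy emulation of half-float and fixed-point matrix computations
    # and a vectorized activation; the Q7.8 format and sizes are assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 128)).astype(np.float32)   # layer weights
    x = rng.standard_normal(128).astype(np.float32)         # input activations

    # Method 1: half-float (float16) matrix computation
    y_half = (W.astype(np.float16) @ x.astype(np.float16)).astype(np.float32)

    # Method 2: fixed-point computation (assumed Q7.8 format, 8 fractional bits)
    FRAC_BITS = 8
    SCALE = 1 << FRAC_BITS
    W_fx = np.round(W * SCALE).astype(np.int32)
    x_fx = np.round(x * SCALE).astype(np.int32)
    y_fx = (W_fx @ x_fx) / (SCALE * SCALE)                   # rescale the integer product

    # Method 3: vectorized activation applied to the whole output vector at once
    y_act = np.maximum(y_fx, 0.0)                            # e.g. a ReLU over the vector

    print(np.abs(y_half - W @ x).max(), np.abs(y_fx - W @ x).max())
    ```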

  2. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL]; Plank, James [University of Tennessee (UT)]; Disney, Adam [University of Tennessee (UT)]; Reynolds, John [University of Tennessee (UT)]

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.
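
    A minimal sketch of what such an EO training loop can look like is given below, assuming a simple (mu + lambda)-style strategy with Gaussian mutation over the weights of a small feed-forward network. It illustrates the general idea only and is not the framework described in the record; the population size, mutation scale, and toy data are assumptions.

    ```python
    # Sketch of a generic evolutionary-optimization training loop (not the ORNL framework).
    import numpy as np

    rng = np.random.default_rng(1)

    def forward(params, X):
        W1, b1, W2, b2 = params
        h = np.tanh(X @ W1 + b1)
        return h @ W2 + b2

    def fitness(params, X, y):
        pred = np.argmax(forward(params, X), axis=1)
        return -np.mean(pred != y)                 # negative error: higher is better

    def init_params():
        return [0.5 * rng.standard_normal((4, 8)), np.zeros(8),
                0.5 * rng.standard_normal((8, 3)), np.zeros(3)]

    def mutate(params, sigma=0.1):
        return [p + sigma * rng.standard_normal(p.shape) for p in params]

    # toy 3-class data, only so the sketch runs end to end
    X = rng.standard_normal((150, 4))
    y = rng.integers(0, 3, 150)

    population = [init_params() for _ in range(20)]
    for generation in range(100):
        ranked = sorted(population, key=lambda p: fitness(p, X, y), reverse=True)
        parents = ranked[:5]                       # keep the fittest individuals
        offspring = [mutate(parents[rng.integers(len(parents))]) for _ in range(15)]
        population = parents + offspring
    ```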

  3. A hardware implementation of neural network with modified HANNIBAL architecture

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bum youb; Chung, Duck Jin [Inha University, Inchon (Korea, Republic of)]

    1996-03-01

    A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Back propagation Algorithm Learning). To implement efficient neural network hardware, we analyzed various types of multipliers, which are the major function block of the neuro-processor cell. Based on this result, we designed an efficient digital neural network hardware using a serial/parallel multiplier and tested its operation. We also analyzed the hardware efficiency with logic-level simulation. (author). 14 refs., 10 figs., 3 tabs.

  4. Hybrid Neural Network Architecture for On-Line Learning

    CERN Document Server

    Chen, Yuhua; Wang, Lei

    2008-01-01

    Approaches to machine intelligence based on brain models have stressed the use of neural networks for generalization. Here we propose the use of a hybrid neural network architecture that uses two kinds of neural networks simultaneously: (i) a surface learning agent that quickly adapts to new modes of operation; and (ii) a deep learning agent that is very accurate within a specific regime of operation. The two networks of the hybrid architecture perform complementary functions that improve the overall performance. The performance of the hybrid architecture has been compared with that of back-propagation perceptrons and the CC and FC networks for chaotic time-series prediction, the CATS benchmark test, and smooth function approximation. It has been shown that the hybrid architecture provides superior performance based on the RMS error criterion.

  5. Optimizing Neural Network Architectures Using Generalization Error Estimators

    DEFF Research Database (Denmark)

    Larsen, Jan

    1994-01-01

    This paper addresses the optimization of neural network architectures. It is suggested to optimize the architecture by selecting the model with minimal estimated averaged generalization error. We consider a least-squares (LS) criterion for estimating neural network models, i.e., the associated...... neural network applications, it is impossible to suggest a perfect model, and consequently the ability to handle incomplete models is urgent. A concise derivation of the GEN-estimator is provided, and its qualities are demonstrated by comparative numerical studies...

  7. An introduction to bio-inspired artificial neural network architectures.

    Science.gov (United States)

    Fasel, B

    2003-03-01

    In this introduction to artificial neural networks we attempt to give an overview of the most important types of neural networks employed in engineering, explain briefly how they operate, and also how they relate to biological neural networks. The focus is mainly on bio-inspired artificial neural network architectures, and specifically on neocognitrons. The latter belong to the family of convolutional neural networks. Their topology is somewhat similar to that of the human visual cortex, and they are based on receptive fields that allow, in combination with sub-sampling layers, for improved robustness with regard to local spatial distortions. We demonstrate the application of artificial neural networks to face analysis--a domain we human beings are particularly good at, yet which poses great difficulties for digital computers running deterministic software programs.

  8. Architecture Analysis of an FPGA-Based Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Angelo de Abreu de Sousa

    2014-01-01

    Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field in order to address several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper addresses different aspects of the architectural characteristics of a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.
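
    For reference, the computation that such hardware accelerates is the standard Hopfield model: Hebbian storage of bipolar patterns and iterative recall with a sign activation. The short NumPy sketch below shows that model only (pattern size and the noisy-probe example are assumptions); it says nothing about the multiplierless FPGA design itself.

    ```python
    # Plain NumPy sketch of the Hopfield model (not the FPGA architecture): Hebbian
    # storage of bipolar patterns and synchronous recall with a sign activation.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 64
    patterns = rng.choice([-1, 1], size=(3, N))           # stored memories

    W = sum(np.outer(p, p) for p in patterns) / N         # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)                              # no self-connections

    def recall(probe, steps=10):
        s = probe.astype(float).copy()
        for _ in range(steps):                            # synchronous updates
            s = np.sign(W @ s)
            s[s == 0] = 1.0
        return s

    # flip ~10% of one stored pattern; recall typically settles back to the original
    noisy = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)
    print(np.array_equal(recall(noisy), patterns[0]))
    ```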

  9. Markovian architectural bias of recurrent neural networks.

    Science.gov (United States)

    Tino, Peter; Cernanský, Michal; Benusková, Lubica

    2004-01-01

    In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not throwing away the continuous state space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure. Index Terms: Complex symbolic sequences, information latching problem, iterative function systems, Markov models, recurrent neural networks (RNNs).
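
    The core idea, extracting a prediction machine from the activation clusters of an untrained, small-weight RNN, can be sketched as follows. The alphabet, network size, and number of clusters below are arbitrary assumptions; the sketch only illustrates the construction of cluster-conditional next-symbol statistics, not the experiments in the paper.

    ```python
    # Sketch: drive an untrained small-weight RNN with a symbolic sequence, cluster the
    # recurrent activations, and use cluster-conditional next-symbol counts as a simple
    # neural prediction machine (NPM). Sizes and k are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    alphabet_size = 3
    seq = rng.integers(0, alphabet_size, 2000)           # toy symbolic sequence

    H, small = 16, 0.1
    Win = small * rng.standard_normal((H, alphabet_size))
    Wrec = small * rng.standard_normal((H, H))

    states, h = [], np.zeros(H)
    for s in seq:
        x = np.eye(alphabet_size)[s]
        h = np.tanh(Win @ x + Wrec @ h)                  # untrained recurrent dynamics
        states.append(h.copy())
    states = np.array(states)

    k = 8
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(states[:-1])

    # next-symbol counts conditioned on the activation cluster (the NPM contexts)
    counts = np.zeros((k, alphabet_size))
    for c, nxt in zip(labels, seq[1:]):
        counts[c, nxt] += 1
    npm = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    print(npm.round(2))
    ```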

  10. Convolutional neural network architectures for predicting DNA–protein binding

    Science.gov (United States)

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
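
    For orientation, a minimal CNN of the kind explored in this line of work operates on one-hot encoded DNA and pools over positions before a final classifier. The sketch below uses assumed layer sizes and is not one of the paper's models (those are available at http://cnn.csail.mit.edu).

    ```python
    # Sketch of a small 1-D CNN over one-hot DNA for binary binding prediction
    # (assumed sizes; not one of the models from the paper).
    import torch
    import torch.nn as nn

    class DnaCNN(nn.Module):
        def __init__(self, n_kernels=32, kernel_size=15):
            super().__init__()
            self.conv = nn.Conv1d(4, n_kernels, kernel_size, padding=kernel_size // 2)
            self.pool = nn.AdaptiveMaxPool1d(1)           # global max pooling over positions
            self.fc = nn.Linear(n_kernels, 1)

        def forward(self, x):                             # x: (batch, 4, seq_len), one-hot DNA
            h = torch.relu(self.conv(x))
            h = self.pool(h).squeeze(-1)                  # (batch, n_kernels) motif scores
            return torch.sigmoid(self.fc(h)).squeeze(-1)  # binding probability per sequence

    model = DnaCNN()
    x = torch.zeros(8, 4, 101)
    x[:, 0, :] = 1.0                                      # dummy all-"A" batch, just to run the sketch
    print(model(x).shape)                                 # torch.Size([8])
    ```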

  11. Biologically relevant neural network architectures for support vector machines.

    Science.gov (United States)

    Jändel, Magnus

    2014-01-01

    Neural network architectures that implement support vector machines (SVM) are investigated for the purpose of modeling perceptual one-shot learning in biological organisms. A family of SVM algorithms including variants of maximum margin, 1-norm, 2-norm and ν-SVM is considered. SVM training rules adapted for neural computation are derived. It is found that competitive queuing memory (CQM) is ideal for storing and retrieving support vectors. Several different CQM-based neural architectures are examined for each SVM algorithm. Although most of the sixty-four scanned architectures are unconvincing for biological modeling, four feasible candidates are found. The seemingly complex learning rule of a full ν-SVM implementation finds a particularly simple and natural implementation in bisymmetric architectures. Since CQM-like neural structures are thought to encode skilled action sequences and bisymmetry is ubiquitous in motor systems, it is speculated that trainable pattern recognition in low-level perception has evolved as an internalized motor programme. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Learning, memory, and the role of neural network architecture.

    Science.gov (United States)

    Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M

    2011-06-01

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  13. Learning, memory, and the role of neural network architecture.

    Directory of Open Access Journals (Sweden)

    Ann M Hermundstad

    2011-06-01

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  14. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.

  15. Modular neural tile architecture for compact embedded hardware spiking neural network

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Cawley, Seamus; Bruintjes, Tom; Smit, Gerard; McGinley, Brian; Carrillo, Snaider; Harkin, Jim; McDaid, Liam

    2013-01-01

    Biologically-inspired packet switched network on chip (NoC) based hardware spiking neural network (SNN) architectures have been proposed as an embedded computing platform for classification, estimation and control applications. Storage of large synaptic connectivity (SNN topology) information in SNN

  16. SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS

    Directory of Open Access Journals (Sweden)

    Marijana Zekić-Sušac

    2012-07-01

    After production and operations, finance and investments are one of the most frequent areas of neural network applications in business. The lack of standardized paradigms that can determine the efficiency of certain NN architectures in a particular problem domain is still present. The selection of NN architecture needs to take into consideration the type of the problem, the nature of the data in the model, as well as some strategies based on result comparison. The paper describes previous research in that area and suggests a forward strategy for selecting the best NN algorithm and structure. Since the strategy includes both parameter-based and variable-based testing, it can be used for selecting NN architectures as well as for extracting models. The backpropagation, radial basis, modular, LVQ and probabilistic neural network algorithms were used on two independent sets: stock market and credit scoring data. The results show that neural networks give better accuracy compared to multiple regression and logistic regression models. Since it is model-independent, the strategy can be used by researchers and professionals in other areas of application.
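
    The comparison strategy described above can be sketched with standard tooling: fit the candidate models and compare their cross-validated accuracy. The snippet below contrasts a backpropagation MLP with logistic regression on a synthetic stand-in for the credit-scoring data; the dataset and hyperparameters are placeholders, not those used in the paper.

    ```python
    # Sketch of the model-comparison strategy on placeholder data (not the paper's datasets).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    candidates = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "MLP (backpropagation)": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                               random_state=0),
    }
    for name, model in candidates.items():
        pipe = make_pipeline(StandardScaler(), model)
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```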

  17. Modelling Spiking Neural Network from the Architecture Evaluation Perspective

    Institute of Scientific and Technical Information of China (English)

    Yu Ji; You-Hui Zhang; Wei-Min Zheng

    2016-01-01

    The brain-inspired spiking neural network (SNN) computing paradigm offers the potential for low-power and scalable computing, suited to many intelligent tasks that conventional computational systems find difficult. On the other hand, NoC (network-on-chip) based very large scale integration (VLSI) systems have been widely used to mimic neuro-biological architectures (including SNNs). This paper proposes an evaluation methodology for SNN applications from the micro-architecture perspective. First, we extract accurate SNN models from existing simulators of neural systems. Second, a cycle-accurate NoC simulator is implemented to execute the aforementioned SNN applications and obtain timing and energy-consumption information. We believe this method not only benefits the exploration of the NoC design space but also bridges the gap between applications (especially those from the neuroscientists’ community) and neuromorphic hardware. Based on the method, we have evaluated some typical SNNs in terms of timing and energy. The method is valuable for the development of neuromorphic hardware and applications.

  18. Modeling cognitive and emotional processes: a novel neural network architecture.

    Science.gov (United States)

    Khashman, Adnan

    2010-12-01

    In our continuous attempts to model natural intelligence and emotions in machine learning, many research works emerge with different methods that are often driven by engineering concerns and have the common goal of modeling human perception in machines. This paper aims to go further in that direction by investigating the integration of emotion at the structural level of cognitive systems using the novel emotional DuoNeural Network (DuoNN). This network has hidden-layer DuoNeurons, each of which has two embedded neurons: a dorsal neuron and a ventral neuron for cognitive and emotional data processing, respectively. When input visual stimuli are presented to the DuoNN, the dorsal cognitive neurons process local features while the ventral emotional neurons process the entire pattern. We present the computational model and the learning algorithm of the DuoNN, the method for streaming the input information (cognitive and emotional) in parallel, and a comparison between the DuoNN and a recently developed emotional neural network. Experimental results show that the DuoNN architecture, its configuration, and the additional emotional information processing yield higher recognition rates and faster learning and decision making.

  19. Deep neural network architectures for forecasting analgesic response.

    Science.gov (United States)

    Nickerson, Paul; Tighe, Patrick; Shickel, Benjamin; Rashidi, Parisa

    2016-08-01

    Response to prescribed analgesic drugs varies between individuals, and choosing the right drug/dose often involves a lengthy, iterative process of trial and error. Furthermore, a significant portion of patients experience adverse events such as post-operative urinary retention (POUR) during inpatient management of acute postoperative pain. To better forecast analgesic responses, we compared conventional machine learning methods with modern neural network architectures to gauge their effectiveness at forecasting temporal patterns of postoperative pain and analgesic use, as well as predicting the risk of POUR. Our results indicate that simpler machine learning approaches might offer superior results; however, all of these techniques may play a promising role for developing smarter post-operative pain management strategies.

  20. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator.

  1. TUTORIAL: Neural blackboard architectures: the realization of compositionality and systematicity in neural networks

    Science.gov (United States)

    de Kamps, Marc; van der Velde, Frank

    2006-03-01

    In this paper, we will first introduce the notions of systematicity and combinatorial productivity and we will argue that these notions are essential for human cognition and probably for every agent that needs to be able to deal with novel, unexpected situations in a complex environment. Agents that use compositional representations are faced with the so-called binding problem and the question of how to create neural network architectures that can deal with it is essential for understanding higher level cognition. Moreover, an architecture that can solve this problem is likely to scale better with problem size than other neural network architectures. Then, we will discuss object-based attention. The influence of spatial attention is well known, but there is solid evidence for object-based attention as well. We will discuss experiments that demonstrate object-based attention and will discuss a model that can explain the data of these experiments very well. The model strongly suggests that this mode of attention provides a neural basis for parallel search. Next, we will show a model for binding in visual cortex. This model is based on a so-called neural blackboard architecture, where higher cortical areas act as processors, specialized for specific features of a visual stimulus, and lower visual areas act as a blackboard for communication between these processors. This implies that lower visual areas are involved in more than bottom-up visual processing, something which already was apparent from the large number of recurrent connections from higher to lower visual areas. This model identifies a specific role for these feedback connections. Finally, we will discuss the experimental evidence that exists for this architecture.

  2. Neural blackboard architectures: the realization of compositionality and systematicity in neural networks.

    Science.gov (United States)

    de Kamps, Marc; van der Velde, Frank

    2006-03-01

    In this paper, we will first introduce the notions of systematicity and combinatorial productivity and we will argue that these notions are essential for human cognition and probably for every agent that needs to be able to deal with novel, unexpected situations in a complex environment. Agents that use compositional representations are faced with the so-called binding problem and the question of how to create neural network architectures that can deal with it is essential for understanding higher level cognition. Moreover, an architecture that can solve this problem is likely to scale better with problem size than other neural network architectures. Then, we will discuss object-based attention. The influence of spatial attention is well known, but there is solid evidence for object-based attention as well. We will discuss experiments that demonstrate object-based attention and will discuss a model that can explain the data of these experiments very well. The model strongly suggests that this mode of attention provides a neural basis for parallel search. Next, we will show a model for binding in visual cortex. This model is based on a so-called neural blackboard architecture, where higher cortical areas act as processors, specialized for specific features of a visual stimulus, and lower visual areas act as a blackboard for communication between these processors. This implies that lower visual areas are involved in more than bottom-up visual processing, something which already was apparent from the large number of recurrent connections from higher to lower visual areas. This model identifies a specific role for these feedback connections. Finally, we will discuss the experimental evidence that exists for this architecture.

  3. An integrated architecture of adaptive neural network control for dynamic systems

    Energy Technology Data Exchange (ETDEWEB)

    Ke, Liu; Tokar, R.; Mcvey, B.

    1994-07-01

    In this study, an integrated neural network control architecture for nonlinear dynamic systems is presented. Most recent work in the neural network control field uses no error feedback as a control input, which raises an adaptation problem. The integrated architecture in this paper combines feed-forward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for certain input styles, e.g., state feedback and error feedback. Feed-forward neural network controllers with state feedback establish fixed control mappings which cannot adapt when model uncertainties are present. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, yielding error-driven adaptive control systems. The results demonstrate that the two kinds of control scheme can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation.

  4. A neural network architecture for implementation of expert systems for real time monitoring

    Science.gov (United States)

    Ramamoorthy, P. A.

    1991-01-01

    Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.

  5. Neural network architecture for cognitive navigation in dynamic environments.

    Science.gov (United States)

    Villacorta-Atienza, José Antonio; Makarov, Valeri A

    2013-12-01

    Navigation in time-evolving environments with moving targets and obstacles requires cognitive abilities widely demonstrated by even the simplest animals. However, it is a long-standing challenging problem for artificial agents. Cognitive autonomous robots coping with this problem must solve two essential tasks: 1) understand the environment in terms of what may happen and how to deal with it, and 2) learn successful experiences for their further use in an automatic, subconscious way. The recently introduced concept of compact internal representation (CIR) provides the ground for both tasks. CIR is a specific cognitive map that compacts time-evolving situations into static structures containing information necessary for navigation. It belongs to the class of global approaches, i.e., it finds trajectories to a target when they exist but also detects situations when no solution can be found. Here we extend the concept to situations with mobile targets. Then, using CIR as a core, we propose a closed-loop neural network architecture consisting of conscious and subconscious pathways for efficient decision-making. The conscious pathway provides solutions to novel situations if the default subconscious pathway fails to guide the agent to a target. Employing experiments with roving robots and numerical simulations, we show that the proposed architecture provides the robot with cognitive abilities and enables reliable and flexible navigation in realistic time-evolving environments. We prove that the subconscious pathway is robust against uncertainty in the sensory information. Thus, if a novel situation is similar but not identical to previous experience (because of, e.g., noisy perception), then the subconscious pathway is able to provide an effective solution.

  6. Seafloor classification using echo- waveforms: A method employing hybrid neural network architecture

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Mahale, V.; DeSouza, C.; Das, P.

    This letter presents the results of a seafloor classification study using a hybrid artificial neural network architecture known as learning vector quantization. Single-beam echo-sounding backscatter waveform data from three different seafloors of the western...

  7. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Disney, Adam [University of Tennessee (UT)]; Reynolds, John [University of Tennessee (UT)]

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  8. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    Science.gov (United States)

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

    Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used for evaluation and prediction of its performance under the different parameters that affect it. However, the modeling of flocculation in microalgae is not simple and has not yet been performed under all experimental conditions, mostly due to the different behaviors of microalgae cells during the process under different flocculation conditions. In the current study, the modeling of microalgae flocculation is investigated with different neural network architectures. The microalgae species Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Multilayer perceptron (MLP) and radial basis function architectures failed to predict the targets successfully; however, modeling was effective with an ensemble architecture of MLP networks. A comparison between the performance of the ensemble and each individual network demonstrates the ability of the ensemble architecture in microalgae flocculation modeling.
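
    The ensemble idea reported here, several MLPs trained from different random initializations with their predictions averaged, can be sketched as follows. The input features and data are synthetic placeholders, not the experimental flocculation measurements.

    ```python
    # Sketch of an MLP regression ensemble with averaged predictions (placeholder data).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.uniform(size=(200, 3))                    # stand-ins for e.g. dose, pH, time
    y = 100 * X[:, 0] * (1 - X[:, 1]) + 5 * rng.standard_normal(200)   # toy efficiency

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    ensemble = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=s)
                .fit(X_tr, y_tr) for s in range(5)]
    y_pred = np.mean([m.predict(X_te) for m in ensemble], axis=0)
    print("ensemble RMSE:", np.sqrt(np.mean((y_pred - y_te) ** 2)))
    ```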

  9. Acoustic characterization of seafloor sediment employing a hybrid method of neural network architecture and fuzzy algorithm

    Digital Repository Service at National Institute of Oceanography (India)

    De, C.; Chakraborty, B.

    Backscatter data [11]–[13] and side-scan sonar images [14]–[16] have been demonstrated for seafloor classification. In this letter, seafloor sediment is characterized using an unsupervised architecture called Kohonen's self-organizing map...

  10. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.

    Science.gov (United States)

    Graves, Alex; Schmidhuber, Jürgen

    2005-01-01

    In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
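
    A bidirectional LSTM for framewise classification can be written in a few lines; the sketch below (feature size, hidden size, and class count are assumptions, not the authors' setup) shows the core of the architecture: one bidirectional recurrent layer whose forward and backward states are concatenated and mapped to per-frame class scores.

    ```python
    # Sketch of a framewise bidirectional LSTM classifier (assumed sizes, not the TIMIT setup).
    import torch
    import torch.nn as nn

    class FramewiseBLSTM(nn.Module):
        def __init__(self, n_features=26, hidden=100, n_classes=61):
            super().__init__()
            self.blstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_classes)   # forward + backward states

        def forward(self, x):                             # x: (batch, frames, n_features)
            h, _ = self.blstm(x)                          # (batch, frames, 2 * hidden)
            return self.out(h)                            # per-frame class scores

    model = FramewiseBLSTM()
    frames = torch.randn(4, 200, 26)                      # 4 utterances, 200 frames each
    logits = model(frames)                                # (4, 200, 61)
    targets = torch.zeros(4 * 200, dtype=torch.long)      # dummy framewise labels
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 61), targets)
    print(logits.shape, loss.item())
    ```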

  11. OPTIMIZATION OF NEURAL NETWORK ARCHITECTURE FOR BIOMECHANIC CLASSIFICATION TASKS WITH ELECTROMYOGRAM INPUTS

    Directory of Open Access Journals (Sweden)

    Alayna Kennedy

    2016-09-01

    Electromyogram signals (EMGs) contain valuable information that can be used in man-machine interfacing between human users and myoelectric prosthetic devices. However, EMG signals are complicated and prove difficult to analyze due to physiological noise and other issues. Computational intelligence and machine learning techniques, such as artificial neural networks (ANNs), serve as powerful tools for analyzing EMG signals and creating optimal myoelectric control schemes for prostheses. This research examines the performance of four different neural network architectures (feedforward, recurrent, counterpropagation, and self-organizing map) that were tasked with classifying walking speed when given EMG inputs from 14 different leg muscles. Experiments conducted on the data set suggest that self-organizing map neural networks are capable of classifying walking speed with greater than 99% accuracy.
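
    For readers unfamiliar with the winning architecture, a self-organizing map can be implemented in a few dozen lines: codebook vectors are pulled toward each sample within a shrinking neighborhood, and the trained units are then labeled by majority vote to act as a classifier. The sketch below uses synthetic stand-in features; the grid size, learning-rate schedule, and data are assumptions, not the study's configuration.

    ```python
    # Minimal self-organizing map (SOM) sketch used as a classifier (placeholder data).
    import numpy as np

    rng = np.random.default_rng(5)
    n_features, grid = 14, (6, 6)                     # e.g. one feature per leg muscle
    X = rng.standard_normal((300, n_features))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)           # toy two-speed labels

    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    codebook = rng.standard_normal((len(coords), n_features))

    for t in range(2000):                             # online SOM training
        x = X[rng.integers(len(X))]
        bmu = np.argmin(np.linalg.norm(codebook - x, axis=1))   # best-matching unit
        lr = 0.5 * np.exp(-t / 1000)                  # decaying learning rate
        sigma = 3.0 * np.exp(-t / 1000)               # decaying neighborhood radius
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))[:, None]   # neighborhood function
        codebook += lr * h * (x - codebook)

    # label each unit by majority vote of the training samples it wins
    wins = np.argmin(np.linalg.norm(X[:, None] - codebook[None], axis=2), axis=1)
    unit_label = np.array([np.bincount(y[wins == u], minlength=2).argmax()
                           if np.any(wins == u) else 0 for u in range(len(coords))])
    print("training accuracy:", np.mean(unit_label[wins] == y))
    ```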

  12. Architecture and biological applications of artificial neural networks: a tuberculosis perspective.

    Science.gov (United States)

    Darsey, Jerry A; Griffin, William O; Joginipelli, Sravanthi; Melapu, Venkata Kiran

    2015-01-01

    Advancement of science and technology has prompted researchers to develop new intelligent systems that can solve a variety of problems such as pattern recognition, prediction, and optimization. The ability of the human brain to learn in a fashion that tolerates noise and error has attracted many researchers and provided the starting point for the development of artificial neural networks: the intelligent systems. Intelligent systems can acclimatize to the environment or data and can maximize the chances of success or improve the efficiency of a search. Due to massive parallelism with large numbers of interconnected processors and their ability to learn from the data, neural networks can solve a variety of challenging computational problems. Neural networks have the ability to derive meaning from complicated and imprecise data; they are used to detect patterns and trends that are too complex for humans or other computer systems. Solutions to the toughest problems will not be found through one narrow specialization; therefore we need to combine interdisciplinary approaches to discover the solutions to a variety of problems. Many researchers in different disciplines such as medicine, bioinformatics, molecular biology, and pharmacology have successfully applied artificial neural networks. This chapter helps the reader in understanding the basics of artificial neural networks, their applications, and methodology; it also outlines the network learning process and architecture. We present a brief outline of the application of neural networks to medical diagnosis, drug discovery, gene identification, and protein structure prediction. We conclude with a summary of the results from our study on tuberculosis data, using neural networks to diagnose active tuberculosis and to predict chronic vs. infiltrative forms of tuberculosis.

  13. Architectural style classification of Mexican historical buildings using deep convolutional neural networks and sparse features

    Science.gov (United States)

    Obeso, Abraham Montoya; Benois-Pineau, Jenny; Acosta, Alejandro Álvaro Ramirez; Vázquez, Mireya Saraí García

    2017-01-01

    We propose a convolutional neural network to classify images of buildings using sparse features at the network's input in conjunction with primary color pixel values. As a result, a trained neuronal model is obtained that classifies Mexican buildings into three classes according to architectural style: prehispanic, colonial, and modern, with an accuracy of 88.01%. The problem of limited information in the training dataset, caused by the unequal availability of cultural material, is addressed with a proposed data augmentation and oversampling method. The results are encouraging and allow for prefiltering of the content in the search tasks.
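
    The augmentation-plus-oversampling idea can be sketched with standard PyTorch utilities: random crops, flips, and color jitter expand the data, while a weighted sampler draws under-represented classes more often. The transforms, class counts, and image sizes below are placeholders, not the authors' recipe.

    ```python
    # Sketch of data augmentation plus class-balanced oversampling (placeholder data).
    import numpy as np
    import torch
    from PIL import Image
    from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler
    from torchvision import transforms

    augment = transforms.Compose([                    # augmentation applied per training image
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])
    example = Image.fromarray(np.uint8(np.random.rand(256, 256, 3) * 255))
    print(augment(example).shape)                     # torch.Size([3, 224, 224])

    # imbalanced toy dataset (e.g. few images of one style); per-sample weights make
    # every class equally likely to be drawn in each epoch
    labels = torch.tensor([0] * 50 + [1] * 300 + [2] * 650)
    images = torch.randn(len(labels), 3, 32, 32)      # stand-in tensors, not real photos
    weights = 1.0 / torch.bincount(labels)[labels].double()
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, sampler=sampler)
    ```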

  14. APPLICATION OF ARCHITECTURE-BASED NEURAL NETWORKS IN MODELING AND PARAMETER OPTIMIZATION OF HYDRAULIC BUMPER

    Institute of Scientific and Technical Information of China (English)

    Yang Haiwei; Zhan Yongqi; Qiao Junwei; Shi Guanglin

    2003-01-01

    The dynamic working process of the 52SFZ-140-207B type hydraulic bumper is analyzed. A modeling method using architecture-based neural networks is introduced. Using this modeling method, the dynamic model of the hydraulic bumper is established. Based on this model, the structural parameters of the hydraulic bumper are optimized with a genetic algorithm. The result shows that the performance of the dynamic model is close to that of the hydraulic bumper, and the dynamic performance of the hydraulic bumper is improved through parameter optimization.

  15. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist, neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object centered model from image centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object based and the image based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions

  16. Hybrid Fuzzy Wavelet Neural Networks Architecture Based on Polynomial Neural Networks and Fuzzy Set/Relation Inference-Based Wavelet Neurons.

    Science.gov (United States)

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2017-08-11

    This paper presents a hybrid fuzzy wavelet neural network (HFWNN) realized with the aid of polynomial neural networks (PNNs) and fuzzy inference-based wavelet neurons (FIWNs). Two types of FIWNs, including fuzzy set inference-based wavelet neurons (FSIWNs) and fuzzy relation inference-based wavelet neurons (FRIWNs), are proposed. In particular, a FIWN without any fuzzy set component (viz., the premise part of a fuzzy rule) becomes a wavelet neuron (WN). To alleviate the limitations of conventional wavelet neural networks or fuzzy wavelet neural networks, whose parameters are determined on a purely random basis, the parameters of the wavelet functions in FIWNs or WNs are initialized by using the C-Means clustering method. The overall architecture of the HFWNN is similar to that of typical PNNs. The main strategies in the design of the HFWNN are developed as follows. First, the first layer of the network consists of FIWNs (e.g., FSIWN or FRIWN) that are used to reflect the uncertainty of data, while the second and higher layers consist of WNs, which exhibit a high level of flexibility and realize a linear combination of wavelet functions. Second, the parameters used in the design of the HFWNN are adjusted through genetic optimization. To evaluate the performance of the proposed HFWNN, several publicly available datasets are considered. Furthermore, a thorough comparative analysis is provided.

  17. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    Science.gov (United States)

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  18. Neural network analyses of infrared spectra for classifying cell wall architectures.

    Science.gov (United States)

    McCann, Maureen C; Defernez, Marianne; Urbanowicz, Breeanna R; Tewari, Jagdish C; Langewisch, Tiffany; Olek, Anna; Wells, Brian; Wilson, Reginald H; Carpita, Nicholas C

    2007-03-01

    About 10% of plant genomes are devoted to cell wall biogenesis. Our goal is to establish methodologies that identify and classify cell wall phenotypes of mutants on a genome-wide scale. Toward this goal, we have used a model system, the elongating maize (Zea mays) coleoptile system, in which cell wall changes are well characterized, to develop a paradigm for classification of a comprehensive range of cell wall architectures altered during development, by environmental perturbation, or by mutation. Dynamic changes in cell walls of etiolated maize coleoptiles, sampled at one-half-d intervals of growth, were analyzed by chemical and enzymatic assays and Fourier transform infrared spectroscopy. The primary walls of grasses are composed of cellulose microfibrils, glucuronoarabinoxylans, and mixed-linkage (1→3),(1→4)-β-D-glucans, together with smaller amounts of glucomannans, xyloglucans, pectins, and a network of polyphenolic substances. During coleoptile development, changes in cell wall composition included a transient appearance of the (1→3),(1→4)-β-D-glucans, a gradual loss of arabinose from glucuronoarabinoxylans, and an increase in the relative proportion of cellulose. Infrared spectra reflected these dynamic changes in composition. Although infrared spectra of walls from embryonic, elongating, and senescent coleoptiles were broadly discriminated from each other by exploratory principal components analysis, neural network algorithms (both genetic and Kohonen) could correctly classify infrared spectra from cell walls harvested from individuals differing at one-half-d interval of growth. We tested the predictive capabilities of the model with a maize inbred line, Wisconsin 22, and found it to be accurate in classifying cell walls representing developmental stage. The ability of artificial neural networks to classify infrared spectra from cell walls provides a means to identify many possible classes of cell wall phenotypes. This classification

  19. Parallel implementation of high-speed, phase diverse atmospheric turbulence compensation method on a neural network-based architecture

    Science.gov (United States)

    Arrasmith, William W.; Sullivan, Sean F.

    2008-04-01

    Phase diversity imaging methods work well in removing atmospheric turbulence and some system effects from predominantly near-field imaging systems. However, phase diversity approaches can be computationally intensive and slow. We present a recently adapted, high-speed phase diversity method using a conventional, software-based neural network paradigm. This phase-diversity method has the advantage of eliminating many time consuming, computationally heavy calculations and directly estimates the optical transfer function from the entrance pupil phases or phase differences. Additionally, this method is more accurate than conventional Zernike-based, phase diversity approaches and lends itself to implementation on parallel software or hardware architectures. We use computer simulation to demonstrate how this high-speed, phase diverse imaging method can be implemented on a parallel, high-speed, neural network-based architecture, specifically the Cellular Neural Network (CNN). The CNN architecture was chosen as a representative, neural network-based processing environment because 1) the CNN can be implemented in 2-D or 3-D processing schemes, 2) it can be implemented in hardware or software, 3) recent 2-D implementations of CNN technology have shown a 3 orders of magnitude superiority in speed, area, or power over equivalent digital representations, and 4) a complete development environment exists. We also provide a short discussion on processing speed.

  20. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....
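
    Both ideas, cross-validation for choosing the architecture and ensembles of similar networks, can be sketched on a placeholder dataset as follows; the candidate sizes and ensemble size are arbitrary choices for the example.

    ```python
    # Sketch: cross-validation to pick a hidden-layer size, then an ensemble of similarly
    # configured networks whose predicted class probabilities are averaged (placeholder data).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=800, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # 1) cross-validation over candidate architectures
    sizes = [(4,), (8,), (16,)]
    cv_scores = {s: cross_val_score(MLPClassifier(hidden_layer_sizes=s, max_iter=2000,
                                                  random_state=0), X_tr, y_tr, cv=5).mean()
                 for s in sizes}
    best = max(cv_scores, key=cv_scores.get)

    # 2) ensemble of similar networks differing only in their random initialization
    ensemble = [MLPClassifier(hidden_layer_sizes=best, max_iter=2000, random_state=s)
                .fit(X_tr, y_tr) for s in range(10)]
    proba = np.mean([m.predict_proba(X_te) for m in ensemble], axis=0)
    print("best size:", best, "ensemble accuracy:", np.mean(proba.argmax(axis=1) == y_te))
    ```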

  1. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  2. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.

    Science.gov (United States)

    Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M

    2016-05-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
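
    The transfer-learning recipe discussed here amounts to replacing the classifier head of an ImageNet pre-trained CNN and fine-tuning with a small learning rate. The sketch below uses ResNet-50 and a two-class head as stand-ins; it is not one of the architectures or training setups from the paper.

    ```python
    # Sketch of fine-tuning an ImageNet pre-trained CNN for a two-class detection task
    # (ResNet-50 and the single-stage recipe are assumptions, not the paper's setup).
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet50(weights="IMAGENET1K_V1")      # ImageNet pre-trained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)         # replace the classifier head

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    images = torch.randn(4, 3, 224, 224)                  # placeholder image-patch batch
    labels = torch.tensor([0, 1, 0, 1])
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(loss.item())
    ```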

  3. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

    Science.gov (United States)

    Shin, Hoo-Chang; Roth, Holger R.; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel

    2016-01-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e., ImageNet) and the revival of deep convolutional neural networks (CNN). CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models (supervised) pre-trained on a natural image dataset to medical image tasks (although domain transfer between two medical image datasets is also possible). In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, with 85% sensitivity at 3 false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.

  4. Neural networks in seismic discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, F.U.

    1995-01-01

    Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.

  5. A self-organized artificial neural network architecture for sensory integration with applications to letter-phoneme integration.

    Science.gov (United States)

    Jantvik, Tamas; Gustafsson, Lennart; Papliński, Andrew P

    2011-08-01

    The multimodal self-organizing network (MMSON), an artificial neural network architecture carrying out sensory integration, is presented here. The architecture is designed using neurophysiological findings and imaging studies that pertain to sensory integration and consists of interconnected lattices of artificial neurons. In this artificial neural architecture, the degree of recognition of stimuli, that is, the perceived reliability of stimuli in the various subnetworks, is included in the computation. The MMSON's behavior is compared to aspects of brain function that deal with sensory integration. According to human behavioral studies, integration of signals from sensory receptors of different modalities enhances perception of objects and events and also reduces time to detection. In neocortex, integration takes place in bimodal and multimodal association areas and results, not only in feedback-mediated enhanced unimodal perception and shortened reaction time, but also in robust bimodal or multimodal percepts. Simulation data from the presented artificial neural network architecture show that it replicates these important psychological and neuroscientific characteristics of sensory integration.

  6. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  7. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent “deep learning revolution” in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. PMID:28377709

  8. Research on architecture of intelligent design platform for artificial neural network expert system

    Science.gov (United States)

    Gu, Honghong

    2017-09-01

    Based on a review of the development and current state of CAD technology, this paper discusses the necessity of combining artificial neural networks with expert systems and then presents an intelligent design system based on an artificial neural network. Moreover, it discusses the feasibility of realizing a design-oriented expert system development tool on the basis of this combination. In addition, the knowledge representation strategy and method and the solving process are given.

  9. Semigroup based neural network architecture for extrapolation of mass unbalance for rotating machines in power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, B.H.; Velas, J.P.; Lee, K.Y [Pennsylvania State Univ., University Park, PA (United States). Dept. of Electrical Engineering

    2006-07-01

    This paper presented a mathematical method that power plant operators can use to estimate rotational mass unbalance, which is the most common source of vibration in turbine generators. An unbalanced rotor or driveshaft causes vibration and stress in the rotating part and in its supporting structure. As such, balancing the rotating part is important to minimize structural stress, minimize operator annoyance and fatigue, increase bearing life, or minimize power loss. The newly proposed method for estimating vibration on a turbine generator uses mass unbalance extrapolation based on a modified system-type neural network architecture, notably the semigroup theory used to study differential equations, partial differential equations and their combinations. Rather than relying on inaccurate vibration measurements, this method extrapolates a set of reliable mass unbalance readings from a common source of vibration. Given a set of empirical data with no analytic expression, the authors first developed an analytic description and then extended that model along a single axis. The algebraic decomposition which was used to obtain the analytic description of empirical data in the semigroup form involved the product of a coefficient vector and a basis set of vectors. The proposed approach was simulated on empirical data. The concept can also be tested in many other engineering and non-engineering problems. 23 refs., 11 figs.

  10. An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy

    Science.gov (United States)

    Dimas, George; Iakovidis, Dimitris K.; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios

    2017-09-01

    Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) of the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables the visualization of the GI lumen and the detection of pathologies. The localization of the commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proved more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup.

  11. On design and evaluation of tapped-delay neural network architectures

    DEFF Research Database (Denmark)

    Svarer, Claus; Hansen, Lars Kai; Larsen, Jan

    1993-01-01

    Pruning and evaluation of tapped-delay neural networks for the sunspot benchmark series are addressed. It is shown that the generalization ability of the networks can be improved by pruning using the optimal brain damage method of Le Cun, Denker and Solla. A stop criterion for the pruning algorithm...
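
    The optimal brain damage criterion referred to above ranks each weight by a saliency estimated from the diagonal of the error Hessian. The NumPy sketch below shows only that ranking-and-pruning step, under the assumption that the Hessian diagonal has already been computed (e.g. by a second-order backpropagation pass); it is a generic illustration, not the paper's code.

```python
import numpy as np

def obd_prune(weights, hessian_diag, fraction=0.1):
    """Zero out the fraction of weights with the smallest optimal-brain-damage saliency.

    weights      : 1-D array of trained network weights
    hessian_diag : diagonal of the Hessian of the training error w.r.t. those weights
    """
    saliency = 0.5 * hessian_diag * weights ** 2    # second-order estimate of the error increase
    n_prune = int(fraction * weights.size)
    prune_idx = np.argsort(saliency)[:n_prune]      # least salient weights go first
    pruned = weights.copy()
    pruned[prune_idx] = 0.0                         # remove them, then retrain and repeat
    return pruned, prune_idx

# toy usage with random values standing in for a trained tapped-delay network
w = np.random.randn(100)
h = np.abs(np.random.randn(100))
w_pruned, removed = obd_prune(w, h, fraction=0.2)
```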

  12. Emergence of the small-world architecture in neural networks by activity dependent growth

    Science.gov (United States)

    Gafarov, F. M.

    2016-11-01

    In this paper, we propose a model describing the growth and development of neural networks based on the latest achievements of experimental neuroscience. The model is based on two evolutionary equations. The first equation is for the evolution of the neurons' state and the second is for the growth of axon tips. By using the model, we demonstrated the neuronal growth process from disconnected neurons to fully connected three-dimensional networks. For the analysis of the network's connection structure, we used methods from random graph theory. It is shown that the growth in neural networks results in the formation of the well-known "small-world" network model. The analysis of the connectivity distribution shows a strictly non-Gaussian but not scale-free in-degree distribution of the nodes. In terms of graph theory, this study develops a new model of a dynamic graph.
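
    To make the small-world claim measurable, the sketch below compares the clustering coefficient and characteristic path length of a graph against a size-matched random graph. NetworkX and the Watts-Strogatz graph standing in for the grown network are assumptions made for illustration; the paper's networks are produced by the activity-dependent growth model itself.

```python
import networkx as nx

def small_world_indices(G):
    """Average clustering and shortest path length, computed on the largest connected component."""
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    return nx.average_clustering(G), nx.average_shortest_path_length(G)

# Stand-in for a grown network: a Watts-Strogatz graph (high clustering, short paths).
grown = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=0)
random_ref = nx.gnm_random_graph(n=200, m=grown.number_of_edges(), seed=1)

C_g, L_g = small_world_indices(grown)
C_r, L_r = small_world_indices(random_ref)

# Small-world: clustering well above the random reference, path length comparable to it.
sigma = (C_g / C_r) / (L_g / L_r)
print(f"clustering {C_g:.3f} vs {C_r:.3f}, path length {L_g:.2f} vs {L_r:.2f}, sigma = {sigma:.2f}")
```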

  13. Fixed latency on-chip interconnect for hardware spiking neural network architectures

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Smit, Gerard; Bruintjes, Tom; Rutgers, Jochem; Cawley, Seamus; Harkin, Jim; McDaid, Liam

    2013-01-01

    Information in a Spiking Neural Network (SNN) is encoded as the relative timing between spikes. Distortion in spike timings can impact the accuracy of SNN operation by modifying the precise firing time of neurons within the SNN. Maintaining the integrity of spike timings is crucial for reliable operation.

  14. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  15. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  16. A Multithread Nested Neural Network Architecture to Model Surface Plasmon Polaritons Propagation

    Directory of Open Access Journals (Sweden)

    Giacomo Capizzi

    2016-06-01

    Full Text Available Surface Plasmon Polaritons are collective oscillations of electrons occurring at the interface between a metal and a dielectric. The propagation phenomena in plasmonic nanostructures are not fully understood, and the interdependence between propagation and metal thickness requires further investigation. We propose an ad-hoc neural network topology assisting the study of the said propagation when several parameters, such as wavelength, propagation length and metal thickness, are considered. This approach is novel and can be considered a first attempt at fully automating such a numerical computation. For the proposed neural network topology, an advanced training procedure has been devised in order to shun the possibility of accumulating errors. The provided results can be useful, e.g., to improve the efficiency of photocells, for photon harvesting, and for improving the accuracy of models for solid state devices.

  17. Designing the Architecture of Hierarchical Neural Networks to Model Attention, Learning and Goal-Oriented Behavior

    Science.gov (United States)

    1993-12-31

  18. Adaptive neural network in a hybrid optical/electronic architecture using lateral inhibition.

    Science.gov (United States)

    Groot, P J; Noll, R J

    1989-09-15

    We report the optical implementation of a neural network based on a nearest matched filter algorithm and extensive lateral inhibition. Extremely rapid learning is demonstrated in pattern recognition and autonomous control applications, without introducing processing artifacts such as spurious states and ambiguous solutions. The optical implementation is achieved with a reconfigurable, bipolar mask-type crossbar switch based on an inexpensive liquid crystal spatial light modulator.

  19. Non-homogenous neural networks with chaotic recursive nodes: connectivity and multi-assemblies structures in recursive processing elements architectures.

    Science.gov (United States)

    Del Moral Hernandez, Emilio

    2005-01-01

    This paper addresses recurrent neural architectures based on bifurcating nodes that exhibit chaotic dynamics, with local dynamics defined by first order parametric recursions. In the studied architectures, logistic recursive nodes interact through parametric coupling, they self organize, and the network evolves to global spatio-temporal period-2 attractors that encode stored patterns. The performance of associative memories arrangements is measured through the average error in pattern recovery, under several levels of prompting noise. The impact of the synaptic connections magnitude on architecture performance is analyzed in detail, through pattern recovery performance measures and basin of attraction characterization. The importance of a planned choice of the synaptic connections scale in RPEs architectures is shown. A strategy for minimizing pattern recovery degradation when the number of stored patterns increases is developed. Experimental results show the success of such strategy. Mechanisms for allowing the studied associative networks to deal with asynchronous changes in input patterns, and tools for the interconnection between different associative assemblies are developed. Finally, coupling in heterogeneous assemblies with diverse recursive maps is analyzed, and the associated synaptic connections are equated.

  20. Compressing Convolutional Neural Networks

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected laye...
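
    FreshNets builds on the weight-hashing idea of HashedNets: many virtual connections share a small pool of stored parameters via a fixed hash (FreshNets applies the hashing to frequency-domain, DCT coefficients of convolutional filters). The NumPy sketch below shows only the basic weight-sharing trick; the seeded random index map standing in for a hash function is an assumption for illustration.

```python
import numpy as np

def hashed_layer(real_weights, out_dim, in_dim, seed=0):
    """Share K stored weights across a virtual (out_dim x in_dim) weight matrix.

    Every virtual connection is mapped to one of the K stored weights by a fixed
    pseudo-random index (a seeded RNG stands in for a hash function here), so the
    layer stores O(K) parameters instead of O(out_dim * in_dim).
    """
    K = real_weights.size
    rng = np.random.default_rng(seed)
    index_map = rng.integers(0, K, size=(out_dim, in_dim))
    sign_map = rng.choice([-1.0, 1.0], size=(out_dim, in_dim))  # optional sign hash
    return real_weights[index_map] * sign_map

real = np.random.randn(64)                      # only 64 stored parameters
W_virtual = hashed_layer(real, out_dim=32, in_dim=128)
y = np.tanh(W_virtual @ np.random.randn(128))   # forward pass through the virtual layer
```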

  1. Plant Growth Models Using Artificial Neural Networks

    Science.gov (United States)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  2. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state of the art algorithms in tasks such as speech recognition or handwriting recognition. It has the ability to process long term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore the low level combination (feature space combination) but we also explore high level combination (decoding combination) and mid-level (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low level combination works best, thanks to the powerful data modeling of the LSTM neurons.
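
    A minimal sketch of the low-level (feature-space) combination that the paper finds most effective: two per-frame feature streams are concatenated before a bidirectional LSTM trained with CTC. The PyTorch layer sizes and label count below are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class EarlyFusionBLSTM(nn.Module):
    """Concatenate two per-frame feature streams, then a bidirectional LSTM with a CTC head."""
    def __init__(self, dim_a, dim_b, hidden=128, n_labels=80):
        super().__init__()
        self.blstm = nn.LSTM(dim_a + dim_b, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_labels + 1)      # +1 for the CTC blank symbol

    def forward(self, feats_a, feats_b):
        x = torch.cat([feats_a, feats_b], dim=-1)            # low-level (feature-space) fusion
        out, _ = self.blstm(x)
        return self.head(out).log_softmax(-1)                # (batch, time, labels)

model = EarlyFusionBLSTM(dim_a=40, dim_b=20)
ctc_loss = nn.CTCLoss(blank=80)  # log-probs must be transposed to (time, batch, labels) for the loss
```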

  3. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was applied to the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image from the input image without image processing. For image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising by the use of a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
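
    As a hedged sketch of the kind of network evaluated here, the PyTorch model below is a small, non-residual convolutional autoencoder with one pooling/upsampling pair, trained to map a noisy frame to its CLAHE-processed ground truth. Kernel size, channel count, and image size are assumptions, not the paper's rCAE configuration.

```python
import torch
import torch.nn as nn

class DenoisingCAE(nn.Module):
    """Small convolutional autoencoder with one pooling/upsampling pair (illustrative only)."""
    def __init__(self, channels=16, kernel=3):
        super().__init__()
        pad = kernel // 2
        self.encode = nn.Sequential(
            nn.Conv2d(1, channels, kernel, padding=pad), nn.ReLU(),
            nn.MaxPool2d(2),                                  # pooling halves the spatial size
            nn.Conv2d(channels, channels, kernel, padding=pad), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),      # upsampling restores the resolution
            nn.Conv2d(channels, 1, kernel, padding=pad),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

model = DenoisingCAE()
noisy = torch.rand(4, 1, 256, 256)        # stand-in for noisy fluoroscopic frames
target = torch.rand(4, 1, 256, 256)       # stand-in for CLAHE-processed ground-truth frames
loss = nn.MSELoss()(model(noisy), target) # train the network to reproduce the ground truth
```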

  4. A neural learning approach for adaptive image restoration using a fuzzy model-based network architecture.

    Science.gov (United States)

    Wong, H S; Guan, L

    2001-01-01

    We address the problem of adaptive regularization in image restoration by adopting a neural-network learning approach. Instead of explicitly specifying the local regularization parameter values, they are regarded as network weights which are then modified through the supply of appropriate training examples. The desired response of the network is in the form of a gray level value estimate of the current pixel using weighted order statistic (WOS) filter. However, instead of replacing the previous value with this estimate, this is used to modify the network weights, or equivalently, the regularization parameters such that the restored gray level value produced by the network is closer to this desired response. In this way, the single WOS estimation scheme can allow appropriate parameter values to emerge under different noise conditions, rather than requiring their explicit selection in each occasion. In addition, we also consider the separate regularization of edges and textures due to their different noise masking capabilities. This in turn requires discriminating between these two feature types. Due to the inability of conventional local variance measures to distinguish these two high variance features, we propose the new edge-texture characterization (ETC) measure which performs this discrimination based on a scalar value only. This is then incorporated into a fuzzified form of the previous neural network which determines the degree of membership of each high variance pixel in two fuzzy sets, the EDGE and TEXTURE fuzzy sets, from the local ETC value, and then evaluates the appropriate regularization parameter by appropriately combining these two membership function values.

  5. A bi-recursive neural network architecture for the prediction of protein coarse contact maps.

    Science.gov (United States)

    Vullo, Alessandro; Frasconi, Paolo

    2002-01-01

    Prediction of contact maps may be seen as a strategic step towards the solution of fundamental open problems in structural genomics. In this paper we focus on coarse grained maps that describe the spatial neighborhood relation between secondary structure elements (helices, strands, and coils) of a protein. We introduce a new machine learning approach for scoring candidate contact maps. The method combines a specialized noncausal recursive connectionist architecture and a heuristic graph search algorithm. The network is trained using candidate graphs generated during search. We show how the process of selecting and generating training examples is important for tuning the precision of the predictor.

  6. Neural network and fuzzy logic based secondary cells charging algorithm development and the controller architecture for implementation

    Science.gov (United States)

    Ullah, Muhammed Zafar

    Neural Network and Fuzzy Logic are the two key technologies that have recently received growing attention in solving real world, nonlinear, time variant problems. Because of their learning and/or reasoning capabilities, these techniques do not need a mathematical model of the system, which may be difficult, if not impossible, to obtain for complex systems. One of the major problems in the portable or electric vehicle world is secondary cell charging, which shows non-linear characteristics. Portable-electronic equipment, such as notebook computers, cordless and cellular telephones and cordless-electric lawn tools, use batteries in increasing numbers. These consumers demand fast charging times, increased battery lifetime and fuel gauge capabilities. All of these demands require that the state-of-charge within a battery be known. Charging secondary cells fast is a problem that is difficult to solve using conventional techniques. Charge control is important in fast charging, preventing overcharging and improving battery life. This research work provides a quick and reliable approach to charger design using Neural-Fuzzy technology, which learns the exact battery charging characteristics. Neural-Fuzzy technology is an intelligent combination of a neural net with fuzzy logic that learns system behavior by using system input-output data rather than mathematical modeling. The primary objective of this research is to improve the secondary cell charging algorithm and to obtain faster charging times based on neural network and fuzzy logic techniques. Also, a new architecture of a controller is developed for implementing the charging algorithm for the secondary battery.

  7. Neural codes of seeing architectural styles.

    Science.gov (United States)

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  8. Comparison of Artificial Neural Network Architecture in Solving Ordinary Differential Equations

    Directory of Open Access Journals (Sweden)

    Susmita Mall

    2013-01-01

    Full Text Available This paper investigates the solution of Ordinary Differential Equations (ODEs) with initial conditions using a Regression Based Algorithm (RBA) and compares the results with arbitrary- and regression-based initial weights for different numbers of nodes in the hidden layer. Here, we have used a feed-forward neural network and the error backpropagation method for minimizing the error function and for the modification of the parameters (weights and biases). Initial weights are taken as a combination of random values as well as those given by the proposed regression-based model. We present the method for solving a variety of problems and the results are compared. Here, the number of nodes in the hidden layer has been fixed according to the degree of the polynomial in the regression fitting. For this, the input and output data are fitted first with polynomials of various degrees using regression analysis and the coefficients involved are taken as initial weights to start the neural training. Fixing of the hidden nodes depends upon the degree of the polynomial. For the example problems, the analytical results have been compared with neural results with arbitrary and regression-based weights with four, five, and six nodes in the hidden layer and are found to be in good agreement.

  9. Neural Network Applications

    NARCIS (Netherlands)

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.

    1995-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  10. 3D High Resolution Mesh Deformation Based on Multi Library Wavelet Neural Network Architecture

    Science.gov (United States)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Amar, Chokri Ben

    2016-12-01

    This paper deals with the features of a novel technique for large Laplacian boundary deformations using estimated rotations. The proposed method is based on a Multi Library Wavelet Neural Network structure founded on several mother wavelet families (MLWNN). The objective is to align features of the mesh and minimize distortion with a fixed feature that minimizes the sum of the distances between all corresponding vertices. The new mesh deformation method works in the domain of a Region of Interest (ROI). Our approach computes the deformed ROI, then updates and optimizes it to align features of the mesh based on the MLWNN and a spherical parameterization configuration. This structure has the advantage of constructing the network from several mother wavelets to solve high-dimensional problems using the mother wavelet that models the signal best. The simulation tests address the robustness and speed considerations involved in developing deformation methodologies. The Mean-Square Error and the ratio of deformation are low compared to other works from the state of the art. Our approach minimizes distortions with fixed features to obtain a well reconstructed object.

  11. Meta-Learning Evolutionary Artificial Neural Networks

    OpenAIRE

    Abraham, Ajith

    2004-01-01

    In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks wherein the neural network architecture, activation function, connection weights; learning algorithm and its parameters are adapted according to the problem. We explored the performance of MLEANN and conventionally designed artificial neural networks for function approximation problems. To evaluate the compara...

  12. Building a Chaotic Proved Neural Network

    CERN Document Server

    Bahi, Jacques M; Salomon, Michel

    2011-01-01

    Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit a chaotic behavior.

  13. Convolutional neural network based deep-learning architecture for prostate cancer detection on multiparametric magnetic resonance images

    Science.gov (United States)

    Tsehay, Yohannes K.; Lay, Nathan S.; Roth, Holger R.; Wang, Xiaosong; Kwak, Jin Tae; Turkbey, Baris I.; Pinto, Peter A.; Wood, Brad J.; Summers, Ronald M.

    2017-03-01

    Prostate cancer (PCa) is the second most common cause of cancer related deaths in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that can automatically detect lesions on mpMRI that readers can use as a reference. We investigated a convolutional neural network based deep-learning (DCNN) architecture to find an improved solution for PCa detection on mpMRI. We adopted a network architecture from a state-of-the-art edge detector that takes an image as an input and produces an image probability map. Two-fold cross validation along with a receiver operating characteristic (ROC) analysis and free-response ROC (FROC) were used to determine our deep-learning based prostate-CAD's (CADDL) performance. The efficacy was compared to an existing prostate CAD system that is based on hand-crafted features, which was evaluated on the same test-set. CADDL had an 86% detection rate at 20% false-positive rate while the top-down learning CAD had 80% detection rate at the same false-positive rate, which translated to 94% and 85% detection rate at 10 false-positives per patient on the FROC. A CNN based CAD is able to detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate-CAD showing potential for further development.

  14. Medical diagnosis using neural network

    CERN Document Server

    Kamruzzaman, S M; Siddiquee, Abu Bakar; Mazumder, Md Ehsanul Hoque

    2010-01-01

    This research is to search for alternatives to the resolution of complex medical diagnosis where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to get an optimal size of a neural network. The MFNNCA was tested on several benchmarking classification problems including cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural networ...
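
    A hedged sketch of the constructive idea described above (grow the single hidden layer one unit at a time and keep growing only while held-out accuracy improves). For brevity it retrains a scikit-learn MLP from scratch at each size rather than adding units incrementally, and the breast-cancer dataset stands in for the benchmarks; neither detail is the MFNNCA algorithm itself.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

best_score, best_n = 0.0, 0
for n_hidden in range(1, 21):               # consider single-hidden-layer networks of growing size
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    score = net.score(X_val, y_val)
    if score > best_score:                  # keep the smallest size that reaches the best accuracy
        best_score, best_n = score, n_hidden

print(f"selected {best_n} hidden units, validation accuracy {best_score:.3f}")
```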

  15. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer networks are often used. Among many neural network architectures, the three-layer feed-forward backpropagation neural...

  16. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different than those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural network.
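
    The defining operation is easy to state in code: a morphological neuron takes the maximum (or minimum) of input-plus-weight sums instead of the sum of input-times-weight products, and is therefore nonlinear before any thresholding. A minimal NumPy sketch (not the authors' code):

```python
import numpy as np

def morphological_layer(x, W, mode="max"):
    """Morphological 'neuron' layer: max (or min) of sums instead of sum of products.

    x : (in_dim,) input vector
    W : (out_dim, in_dim) synaptic strengths
    """
    s = W + x[None, :]                       # addition replaces multiplication
    return s.max(axis=1) if mode == "max" else s.min(axis=1)  # max/min replaces summation

x = np.array([0.2, -1.0, 0.5])
W = np.random.randn(4, 3)
y = morphological_layer(x, W)                # nonlinear output, no thresholding needed
```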

  17. NARX neural networks for sequence processing tasks

    OpenAIRE

    Hristev, Eugen

    2012-01-01

    This project aims at researching and implementing a neural network architecture system for the NARX (Nonlinear AutoRegressive with eXogenous inputs) model, used in sequence processing tasks and particularly in time series prediction. The model can fallback to different types of architectures including time-delay neural networks and multi layer perceptron. The NARX simulator tests and compares the different architectures for both synthetic and real data, including the time series o...
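
    The NARX idea is to predict y(t) from lagged outputs and lagged exogenous inputs. The sketch below builds that lagged feature matrix and fits a small MLP as the nonlinear map; the toy series, lag orders, and use of scikit-learn are assumptions made for illustration, not the project's simulator.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_dataset(y, u, ny=3, nu=3):
    """Build (features, target) pairs: predict y[t] from y[t-1..t-ny] and u[t-1..t-nu]."""
    start = max(ny, nu)
    X, T = [], []
    for t in range(start, len(y)):
        X.append(np.concatenate([y[t - ny:t], u[t - nu:t]]))
        T.append(y[t])
    return np.array(X), np.array(T)

# toy series: output driven by an exogenous input u through a simple nonlinear recursion
t = np.arange(500)
u = np.sin(0.05 * t)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.3 * u[k - 1] ** 2

X, T = narx_dataset(y, u)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, T)
```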

  18. Constructive neural network learning

    OpenAIRE

    Lin, Shaobo; Zeng, Jinshan; Zhang, Xiaoqin

    2016-01-01

    In this paper, we aim at developing scalable neural network-type learning systems. Motivated by the idea of "constructive neural networks" in approximation theory, we focus on "constructing" rather than "training" feed-forward neural networks (FNNs) for learning, and propose a novel FNNs learning system called the constructive feed-forward neural network (CFN). Theoretically, we prove that the proposed method not only overcomes the classical saturation problem for FNN approximation, but also ...

  19. Generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network. They are the input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network has a gradient descent based optimization of the smoothing parameter and a diverge effect term added as calculation improvements. The diverge effect term is an improvement on the summation layer calculation to supply additional separation ability and flexibility. Performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance of up to 89% is observed. The improved classification performance proves the effectiveness of the proposed neural network.
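
    For orientation, the sketch below implements only the classical pattern/summation layers that the generalized classifier neural network extends (one Gaussian kernel per training sample, class scores averaged per class). The normalization layer, the gradient-descent tuning of the smoothing parameter, and the diverge effect term described in the record are not reproduced here.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Kernel-density classifier: pattern layer (one Gaussian per sample), summation layer (class means)."""
    scores = {}
    for c in np.unique(y_train):
        d2 = np.sum((X_train[y_train == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_predict(X, y, np.array([2.8, 3.1])))   # expected: class 1
```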

  20. Neural networks with chaotic recursive nodes: techniques for the design of associative memories, contrast with Hopfield architectures, and extensions for time-dependent inputs.

    Science.gov (United States)

    Del-Moral-Hernandez, Emilio

    2003-01-01

    This paper addresses the coding and storage of information in neural architectures with bifurcating recursive nodes that exhibit chaotic dynamics. It describes architectures of coupled recursive processing elements (RPEs) used to store binary strings, discusses the choices of network parameters related to the coding of zeros and ones, and analyzes several aspects of the network operation in implementing associative memories through populations of logistic maps. Experiments for the performance evaluation of these memories are described, and results addressing the operation under digital noise (flipped bits) and analog noise added to the prompting pattern are presented and analyzed. Quantitative aspects related to the representation of binary strings through cyclic states are equated, and then related to the planning and analysis of several experiments. A simple pre-processing procedure useful in situations of prompting conditions with analog noise is proposed, and the resultant increase in recovery performance presented. The performance of the RPEs associative networks is contrasted with the performance of Hopfield associative memories, and the situations where the RPEs networks present significant superiority are identified. An extended version of the proposed architecture, which allows to address the issues of time-dependent inputs and analog inputs, is analyzed in detail. Experimental results are presented, and the role of this extended architecture in providing mechanisms for modular RPEs architectures is pointed out.

  1. Using genetic algorithms to select architecture of a feedforward artificial neural network

    Science.gov (United States)

    Arifovic, Jasmina; Gençay, Ramazan

    2001-01-01

    This paper proposes a model selection methodology for feedforward network models based on genetic algorithms and makes a number of distinct but inter-related contributions to the model selection literature for feedforward networks. First, we construct a genetic algorithm which can search for the global optimum of an arbitrary function as the output of a feedforward network model. Second, we allow the genetic algorithm to evolve the type of inputs, the number of hidden units and the connection structure between the inputs and the output layers. Third, we study how the introduction of a local elitist procedure, which we call the election operator, affects the algorithm's performance. We conduct a Monte Carlo simulation to study the sensitivity of the global approximation properties of the studied genetic algorithm. Finally, we apply the proposed methodology to daily foreign exchange returns.
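
    A hedged sketch of the genetic search: genomes encode which inputs the model may use, fitness rewards low error, and an elitist "election" step keeps the best genome each generation. To stay short it evolves only an input mask and scores a linear least-squares fit; the paper additionally evolves hidden units and connectivity and evaluates actual feedforward networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def fitness(mask, X, y):
    """Negative squared error of a least-squares fit restricted to the selected inputs.

    A real run would train a feedforward network and score it on held-out data;
    the linear fit keeps this sketch short.
    """
    if not mask.any():
        return -1e9
    Xs = X[:, mask.astype(bool)]
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -float(np.mean((Xs @ w - y) ** 2))

def evolve_inputs(X, y, pop_size=20, n_gen=30, p_mut=0.05):
    """Genetic search over which candidate inputs the model should use."""
    n_inputs = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_inputs))
    for _ in range(n_gen):
        fit = np.array([fitness(g, X, y) for g in pop])
        elite = pop[np.argmax(fit)].copy()                   # election operator: always keep the best genome
        parents = pop[rng.choice(pop_size, size=pop_size, p=softmax(fit))]
        cuts = rng.integers(1, n_inputs, size=pop_size)      # one-point crossover
        pop = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % pop_size][c:]])
                        for i, c in enumerate(cuts)])
        flips = rng.random(pop.shape) < p_mut                # bit-flip mutation
        pop[flips] = 1 - pop[flips]
        pop[0] = elite
    return pop[np.argmax([fitness(g, X, y) for g in pop])]

# toy problem: only inputs 0, 3 and 7 of ten candidates actually matter
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=200)
print(evolve_inputs(X, y))
```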

  2. Bridging LSTM Architecture and the Neural Dynamics during Reading

    OpenAIRE

    Qian, Peng; Qiu, Xipeng; Huang, Xuanjing

    2016-01-01

    Recently, the long short-term memory neural network (LSTM) has attracted wide interest due to its success in many tasks. LSTM architecture consists of a memory cell and three gates, which looks similar to the neuronal networks in the brain. However, there still lacks the evidence of the cognitive plausibility of LSTM architecture as well as its working mechanism. In this paper, we study the cognitive plausibility of LSTM by aligning its internal architecture with the brain activity observed v...

  3. Heterogeneous network architectures

    DEFF Research Database (Denmark)

    Christiansen, Henrik Lehrmann

    2006-01-01

    Future networks will be heterogeneous! Due to the sheer size of networks (e.g., the Internet) upgrades cannot be instantaneous and thus heterogeneity appears. This means that instead of trying to find the solution, networks should be designed as being heterogeneous. One of the key requirements here is flexibility. This thesis investigates such heterogeneous network architectures and how to make them flexible. A survey of algorithms for network design is presented, and it is described how using heuristics can increase the speed. A hierarchical, MPLS based network architecture is described, and it is discussed that it is advantageous to heterogeneous networks and illustrated by a number of examples. Modeling and simulation is a well-known way of doing performance evaluation. An approach to event-driven simulation of communication networks is presented and mixed complexity modeling, which can simplify...

  4. Chaotic diagonal recurrent neural network

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks.

  5. Artificial Neural Networks

    OpenAIRE

    Chung-Ming Kuan

    2006-01-01

    Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANN using familiar econometric terminology and provide an overview of ANN modeling approach and its implementation methods.

  6. An adaptive algorithm for designing optimal feed-forward neural network architecture

    Institute of Scientific and Technical Information of China (English)

    张昭昭; 乔俊飞; 杨刚

    2011-01-01

    Most algorithms for designing feed-forward neural network architectures adopt a greedy search strategy and are therefore susceptible to becoming trapped in a locally optimal structure. To address this, an adaptive algorithm for designing an optimal feed-forward neural network architecture is proposed. During network training, an adaptive optimization strategy merges and splits hidden units in order to arrive at an optimal network structure. In the merge operation, hidden units whose outputs are linearly correlated are merged according to a mutual information criterion. In the split operation, a mutation coefficient is introduced to help the search jump out of locally optimal network structures. The weight adjustment that follows the merge and split operations is combined with the network's learning on the training samples, which reduces the number of learning passes over the samples, increases the training speed, and improves the generalization performance. Results on nonlinear function approximation show that the proposed algorithm achieves smaller test errors with a compact final network structure.
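
    A minimal sketch of the merge step only, using plain correlation of hidden activations as a stand-in for the mutual-information criterion; the split (mutation) step and the coupling with training are omitted, and all names below are hypothetical.

```python
import numpy as np

def merge_most_dependent(W_in, W_out, H, threshold=0.95):
    """Merge the pair of hidden units whose activations are most linearly dependent.

    W_in  : (n_hidden, n_inputs) input-to-hidden weights
    W_out : (n_outputs, n_hidden) hidden-to-output weights
    H     : (n_samples, n_hidden) hidden activations on the training data
    """
    corr = np.corrcoef(H, rowvar=False)
    np.fill_diagonal(corr, 0.0)
    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    if abs(corr[i, j]) < threshold:
        return W_in, W_out                        # nothing redundant enough to merge
    keep, drop = min(i, j), max(i, j)
    W_out = W_out.copy()
    W_out[:, keep] += np.sign(corr[i, j]) * W_out[:, drop]   # fold the dropped unit's output weights in
    return np.delete(W_in, drop, axis=0), np.delete(W_out, drop, axis=1)
```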

  7. Load forecasting using different architectures of neural networks with the assistance of the MATLAB toolboxes

    Energy Technology Data Exchange (ETDEWEB)

    Nose Filho, Kenji; Araujo, Klayton A.M.; Maeda, Jorge L.Y.; Lotufo, Anna Diva P. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil)], Emails: kenjinose@yahoo.com.br, klayton_ama@hotmail.com, jorge-maeda@hotmail.com, annadiva@dee.feis.unesp.br

    2009-07-01

    This paper presents the development and implementation of a program for electrical load forecasting with data from a Brazilian electrical company, using four different neural network architectures from the MATLAB toolboxes: multilayer backpropagation with gradient descent and momentum, multilayer backpropagation with Levenberg-Marquardt, adaptive network based fuzzy inference system, and general regression neural network. The program presented a satisfactory performance, guaranteeing very good results. (author)

  8. Balanced Neural Architecture and the Idling Brain

    Directory of Open Access Journals (Sweden)

    Brent eDoiron

    2014-05-01

    Full Text Available A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in spontaneous conditions and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models lack the long timescale fluctuations and large variability present in spontaneous conditions. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. In our models, the dynamics of spontaneous activity encompasses much of the possible evoked states, consistent with many experimental reports. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. In total, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks.

  9. Balanced neural architecture and the idling brain.

    Science.gov (United States)

    Doiron, Brent; Litwin-Kumar, Ashok

    2014-01-01

    A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in the spontaneous state and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models produce spike trains which lack the long timescale fluctuations and large variability exhibited during spontaneous cortical dynamics. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. In total, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks.

  10. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  11. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
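
    As a rough modern analogue of this setup, the sketch below trains a small MLP to map four simulated drift times to a track's intercept and slope. The linear drift-time model, the positive-track restriction (to sidestep left-right ambiguity), and the scikit-learn regressor are assumptions made for illustration, not the ETANN configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Simulated straight tracks x(z) = intercept + slope * z crossing four wire planes.
z_planes = np.array([0.0, 1.0, 2.0, 3.0])
n_tracks = 5000
slope = rng.uniform(0.0, 0.5, n_tracks)          # positive only, to avoid left-right ambiguity
intercept = rng.uniform(0.1, 1.0, n_tracks)

# Drift time taken as proportional to the distance from a wire plane at x = 0,
# plus measurement noise (a crude stand-in for real chamber geometry).
distance = intercept[:, None] + slope[:, None] * z_planes[None, :]
drift_times = distance + rng.normal(0.0, 0.02, distance.shape)

# Train a small network to map the four drift times to (intercept, slope).
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(drift_times, np.column_stack([intercept, slope]))
print(net.predict(drift_times[:1]), intercept[0], slope[0])
```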

  12. Evolving Neural Network Architecture

    Science.gov (United States)

    1993-03-01

    associated with individual ADALINEs. If better results are obtained, then the new weight values are kept; otherwise, the new weights are ignored. If ... the training process exhausts trials involving a single ADALINE, pairwise (or higher) adaptations are attempted. The 3-bit parity problem has been

  13. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  14. Mobile networks architecture

    CERN Document Server

    Perez, Andre

    2013-01-01

    This book explains the evolutions of architecture for mobiles and summarizes the different technologies:- 2G: the GSM (Global System for Mobile) network, the GPRS (General Packet Radio Service) network and the EDGE (Enhanced Data for Global Evolution) evolution;- 3G: the UMTS (Universal Mobile Telecommunications System) network and the HSPA (High Speed Packet Access) evolutions:- HSDPA (High Speed Downlink Packet Access),- HSUPA (High Speed Uplink Packet Access),- HSPA+;- 4G: the EPS (Evolved Packet System) network.The telephone service and data transmission are the

  15. Oscillatory neurocomputing with ring attractors: a network architecture for mapping locations in space onto patterns of neural synchrony.

    Science.gov (United States)

    Blair, Hugh T; Wu, Allan; Cong, Jason

    2014-02-01

    Theories of neural coding seek to explain how states of the world are mapped onto states of the brain. Here, we compare how an animal's location in space can be encoded by two different kinds of brain states: population vectors stored by patterns of neural firing rates, versus synchronization vectors stored by patterns of synchrony among neural oscillators. It has previously been shown that a population code stored by spatially tuned 'grid cells' can exhibit desirable properties such as high storage capacity and strong fault tolerance; here it is shown that similar properties are attainable with a synchronization code stored by rhythmically bursting 'theta cells' that lack spatial tuning. Simulations of a ring attractor network composed from theta cells suggest how a synchronization code might be implemented using fewer neurons and synapses than a population code with similar storage capacity. It is conjectured that reciprocal connections between grid and theta cells might control phase noise to correct two kinds of errors that can arise in the code: path integration and teleportation errors. Based upon these analyses, it is proposed that a primary function of spatially tuned neurons might be to couple the phases of neural oscillators in a manner that allows them to encode spatial locations as patterns of neural synchrony.

  16. Neural Networks: Implementations and Applications

    NARCIS (Netherlands)

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  18. Future Network Architectures

    DEFF Research Database (Denmark)

    Wessing, Henrik; Bozorgebrahimi, Kurosh; Belter, Bartosz;

    2015-01-01

    This study identifies key requirements for NRENs towards future network architectures that become apparent as users become more mobile and have increased expectations in terms of availability of data. In addition, cost saving requirements call for federated use of, in particular, the optical spec...

  19. Quantifying loopy network architectures.

    Directory of Open Access Journals (Sweden)

    Eleni Katifori

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  20. The Laplacian spectrum of neural networks.

    Science.gov (United States)

    de Lange, Siemon C; de Reus, Marcel A; van den Heuvel, Martijn P

    2014-01-13

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these "conventional" graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.

  1. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
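
    The normalized Laplacian spectrum analyzed in this record can be computed for any undirected connectivity matrix. A minimal sketch, assuming an unweighted symmetric adjacency matrix, is given below.

        # Sketch: eigenvalue spectrum of the normalized Laplacian L = I - D^(-1/2) A D^(-1/2)
        # for an undirected, unweighted network given as an adjacency matrix.
        import numpy as np

        def normalized_laplacian_spectrum(A):
            A = np.asarray(A, dtype=float)
            deg = A.sum(axis=1)
            d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
            L = np.eye(len(A)) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
            return np.sort(np.linalg.eigvalsh(L))          # eigenvalues lie in [0, 2]

        # toy example: a ring of 6 nodes plus one shortcut edge
        A = np.zeros((6, 6))
        for i in range(6):
            A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
        A[0, 3] = A[3, 0] = 1
        print(normalized_laplacian_spectrum(A))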

  2. NeuCube: a spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data.

    Science.gov (United States)

    Kasabov, Nikola K

    2014-04-01

    The brain functions as a spatio-temporal information processing machine. Spatio- and spectro-temporal brain data (STBD) are the most commonly collected data for measuring brain response to external stimuli. An enormous amount of such data has already been collected, including brain structural and functional data under different conditions, molecular and genetic data, in an attempt to make progress in medicine, health, cognitive science, engineering, education, neuro-economics, Brain-Computer Interfaces (BCI), and games. Yet, there is no unifying computational framework to deal with all these types of data in order to better understand this data and the processes that generated it. Standard machine learning techniques only partially succeeded, and they were not designed in the first instance to deal with such complex data. Therefore, there is a need for a new paradigm to deal with STBD. This paper reviews some methods of spiking neural networks (SNN) and argues that SNN are suitable for the creation of a unifying computational framework for learning and understanding of various STBD, such as EEG, fMRI, genetic, DTI, MEG, and NIRS, in their integration and interaction. One of the reasons is that SNN use the same computational principle that generates STBD, namely spiking information processing. This paper introduces a new SNN architecture, called NeuCube, for the creation of concrete models to map, learn and understand STBD. A NeuCube model is based on a 3D evolving SNN that is an approximate map of structural and functional areas of interest of the brain related to the STBD being modeled. Gene information is included optionally in the form of gene regulatory networks (GRN) if this is relevant to the problem and the data. A NeuCube model learns from STBD and creates connections between clusters of neurons that manifest chains (trajectories) of neuronal activity. Once learning is applied, a NeuCube model can reproduce these trajectories, even if only part of the input

  3. The Physics of Neural Networks

    Science.gov (United States)

    Gutfreund, Hanoch; Toulouse, Gerard

    The following sections are included: * Introduction * Historical Perspective * Why Statistical Physics? * Purpose and Outline of the Paper * Basic Elements of Neural Network Models * The Biological Neuron * From the Biological to the Formal Neuron * The Formal Neuron * Network Architecture * Network Dynamics * Basic Functions of Neural Network Models * Associative Memory * Learning * Categorization * Generalization * Optimization * The Hopfield Model * Solution of the Model * The Merit of the Hopfield Model * Beyond the Standard Model * The Gardner Approach * A Microcanonical Formulation * The Case of Biased Patterns * A Canonical Formulation * Constraints on the Synaptic Weights * Learning with Errors * Learning with Noise * Hierarchically Correlated Data and Categorization * Hierarchical Data Structures * Storage of Hierarchical Data Structures * Categorization * Generalization * Learning a Classification Task * The Reference Perceptron Problem * The Contiguity Problem * Discussion - Issues of Relevance * The Notion of Attractors and Modes of Computation * The Nature of Attractors * Temporal versus Spatial Coding * Acknowledgements * References

  4. Dissociated emergent-response system and fine-processing system in human neural network and a heuristic neural architecture for autonomous humanoid robots.

    Science.gov (United States)

    Yan, Xiaodan

    2010-01-01

    The current study investigated the functional connectivity of the primary sensory system with resting state fMRI and applied such knowledge into the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. Dissociation was within the primary sensory system, in that the olfactory cortex and the somatosensory cortex were strongly connected to the amygdala whereas the visual cortex and the auditory cortex were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. Such neural architecture inspired the design of dissociated emergent-response system and fine-processing system in autonomous humanoid robots, with separate processing units and another consolidation center to coordinate the two systems. Such design can help autonomous robots to detect and respond quickly to danger, so as to maintain their sustainability and independence.

  5. Dissociated Emergent-Response System and Fine-Processing System in Human Neural Network and a Heuristic Neural Architecture for Autonomous Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Xiaodan Yan

    2010-01-01

    The current study investigated the functional connectivity of the primary sensory system with resting state fMRI and applied such knowledge into the design of the neural architecture of autonomous humanoid robots. Correlation and Granger causality analyses were utilized to reveal the functional connectivity patterns. Dissociation was within the primary sensory system, in that the olfactory cortex and the somatosensory cortex were strongly connected to the amygdala whereas the visual cortex and the auditory cortex were strongly connected with the frontal cortex. The posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) were found to maintain constant communication with the primary sensory system, the frontal cortex, and the amygdala. Such neural architecture inspired the design of dissociated emergent-response system and fine-processing system in autonomous humanoid robots, with separate processing units and another consolidation center to coordinate the two systems. Such design can help autonomous robots to detect and respond quickly to danger, so as to maintain their sustainability and independence.

  6. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  7. Prediction based chaos control via a new neural network

    Energy Technology Data Exchange (ETDEWEB)

    Shen Liqun [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China)], E-mail: liqunshen@gmail.com; Wang Mao [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China); Liu Wanyu [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China); Sun Guanghui [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China)

    2008-11-17

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty in building and training the neural network is also reduced. Simulation results for the logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.
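
    To illustrate the prediction step on one of the benchmark systems mentioned in the abstract, the sketch below fits a small feed-forward network to predict the logistic map one step ahead. It does not reproduce the Letter's network architecture or the control scheme; the map parameter, network size and training settings are assumptions for the example.

        # Sketch: one-step-ahead prediction of the logistic map x_{t+1} = r x_t (1 - x_t)
        # with a small feed-forward network.  This only illustrates the prediction step,
        # not the control scheme or the network architecture proposed in the Letter.
        import numpy as np

        rng = np.random.default_rng(1)
        r = 3.9
        x = np.empty(3000); x[0] = 0.3
        for t in range(2999):
            x[t + 1] = r * x[t] * (1 - x[t])
        X, Y = x[:-1, None], x[1:, None]

        W1 = rng.normal(0, 1.0, (1, 16)); b1 = np.zeros(16)
        W2 = rng.normal(0, 1.0, (16, 1)); b2 = np.zeros(1)
        lr = 0.1
        for epoch in range(2000):
            h = np.tanh(X @ W1 + b1)
            p = h @ W2 + b2
            e = p - Y
            gW2 = h.T @ e / len(X); gb2 = e.mean(0)
            dh = (e @ W2.T) * (1 - h ** 2)
            gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        print("mean squared one-step prediction error:", float((e ** 2).mean()))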

  8. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  10. Neural networks and graph theory

    Institute of Scientific and Technical Information of China (English)

    许进; 保铮

    2002-01-01

    The relationships between artificial neural networks and graph theory are considered in detail. The applications of artificial neural networks to many difficult problems of graph theory, especially NP-complete problems, and the applications of graph theory to artificial neural networks are discussed. For example, graph theory is used to study the pattern classification problem on discrete-type feedforward neural networks, and the stability analysis of feedback artificial neural networks, etc.

  11. Estimation of concrete compressive strength using artificial neural network

    OpenAIRE

    Kostić, Srđan; Vasović, Dejan

    2015-01-01

    In the present paper, concrete compressive strength is evaluated using a back-propagation feed-forward artificial neural network. Training of the neural network is performed using the Levenberg-Marquardt learning algorithm for four architectures of artificial neural networks, with one, three, eight and twelve nodes in the hidden layer, in order to avoid the occurrence of overfitting. Training, validation and testing of the neural network are conducted for 75 concrete samples with distinct w/c ratio and amount of superp...
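
    A sketch of the kind of architecture comparison described here is shown below, using scikit-learn's MLPRegressor on synthetic data. The record used the Levenberg-Marquardt algorithm, which scikit-learn does not provide, so the 'lbfgs' solver stands in; the input variables and target formula are placeholders, not the actual concrete data set.

        # Sketch: comparing hidden-layer sizes to guard against overfitting, in the spirit
        # of the record (which used Levenberg-Marquardt training; scikit-learn does not
        # offer that solver, so 'lbfgs' is used here).  The data below is synthetic.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        wc_ratio = rng.uniform(0.35, 0.65, 75)                   # water/cement ratio
        admixture = rng.uniform(0.0, 2.0, 75)                    # superplasticizer dose
        strength = 90 - 80 * wc_ratio + 3 * admixture + rng.normal(0, 2, 75)

        X = np.column_stack([wc_ratio, admixture])
        Xtr, Xte, ytr, yte = train_test_split(X, strength, test_size=0.25, random_state=0)

        for hidden in (1, 3, 8, 12):
            model = MLPRegressor(hidden_layer_sizes=(hidden,), solver="lbfgs",
                                 max_iter=5000, random_state=0)
            model.fit(Xtr, ytr)
            print(hidden, "hidden nodes -> test R^2 =", round(model.score(Xte, yte), 3))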

  12. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  13. A C-LSTM Neural Network for Text Classification

    OpenAIRE

    Zhou, Chunting; Sun, Chonglin; Liu, Zhiyuan; Lau, Francis C. M.

    2015-01-01

    Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-...
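
    The general CNN-into-LSTM idea can be sketched in a few lines of PyTorch: a 1-D convolution over word embeddings produces a sequence of higher-level features, which an LSTM consumes before a final classification layer. The vocabulary, layer sizes and kernel width below are placeholder assumptions, not the settings of the C-LSTM paper.

        # Sketch of the general CNN-into-LSTM idea for sentence classification, in PyTorch.
        # Layer sizes and vocabulary are placeholder assumptions, not the paper's settings.
        import torch
        import torch.nn as nn

        class CLSTMSketch(nn.Module):
            def __init__(self, vocab_size=10000, emb_dim=128, n_filters=100,
                         kernel_size=3, hidden=100, n_classes=2):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim)
                self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)   # windows of k words
                self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
                self.out = nn.Linear(hidden, n_classes)

            def forward(self, tokens):                 # tokens: (batch, seq_len) int64
                e = self.embed(tokens)                 # (batch, seq_len, emb_dim)
                c = torch.relu(self.conv(e.transpose(1, 2)))   # (batch, filters, seq_len-k+1)
                seq = c.transpose(1, 2)                # feature sequence for the LSTM
                _, (h_n, _) = self.lstm(seq)
                return self.out(h_n[-1])               # logits per class

        logits = CLSTMSketch()(torch.randint(0, 10000, (4, 20)))
        print(logits.shape)                            # torch.Size([4, 2])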

  14. Towards a networkArchitecture

    DEFF Research Database (Denmark)

    Rüdiger, Bjarne; Tournay, Bruno

    2001-01-01

    Poster, contribution to the DAL competition. Where industry was the inspiration for the development of modern architecture, IT is the technical and aesthetic basis of the emerging NetworkArchitecture. The computer, and networks of computers, are thus more than a metaphor for NetworkArchitecture. NetworkArchitecture consists of intelligent building components connected to one another in a network and interacting with their surroundings.

  15. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  16. 0.8 /spl mu/m CMOS implementation of weighted-order statistic image filter based on cellular neural network architecture.

    Science.gov (United States)

    Kowalski, J

    2003-01-01

    In this paper, a very large scale integration chip of an analog image weighted-order statistic (WOS) filter based on cellular neural network (CNN) architecture for real-time applications is described. The chip has been implemented in CMOS AMS 0.8 /spl mu/m technology. CNN-based filter consists of feedforward nonlinear template B operating within the window of 3 /spl times/ 3 pixels around the central pixel being filtered. The feedforward nonlinear CNN coefficients have been realized using programmable nonlinear coupler circuits. The WOS filter chip allows for processing of images with 300 pixels horizontal resolution. The resolution can be increased by cascading of the chips. Experimental results of basic circuit building blocks measurements are presented. Functional tests of the chip have been performed using a special test setup for PAL composite video signal processing. Using the setup real images have been filtered by WOS filter chip under test.
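
    A weighted-order statistic filter of the kind computed by the chip can be sketched in software as follows: each sample of the 3 x 3 window is replicated according to an integer weight, the replicated samples are sorted, and the k-th smallest value becomes the output pixel (unit weights and the middle rank recover the ordinary median filter). The weights and rank below are arbitrary assumptions; the sketch illustrates the filter definition, not the analog CNN implementation.

        # Sketch: a weighted-order statistic (WOS) filter over 3x3 windows.  Each window
        # sample is replicated according to an integer weight, the replicated samples are
        # sorted, and the k-th smallest value becomes the output pixel.  Software only.
        import numpy as np

        def wos_filter(image, weights, rank):
            weights = np.asarray(weights, dtype=int).reshape(3, 3)
            padded = np.pad(image, 1, mode="edge")
            out = np.empty_like(image, dtype=float)
            for i in range(image.shape[0]):
                for j in range(image.shape[1]):
                    window = padded[i:i + 3, j:j + 3]
                    samples = np.repeat(window.ravel(), weights.ravel())
                    out[i, j] = np.sort(samples)[rank]
            return out

        img = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
        w = [[1, 1, 1], [1, 3, 1], [1, 1, 1]]          # emphasize the central pixel
        filtered = wos_filter(img, w, rank=5)          # rank chosen out of sum(w) = 11 samples
        print(filtered.shape)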

  17. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  18. Dynamic adaptive modular neural network architecture design

    Institute of Scientific and Technical Information of China (English)

    张昭昭

    2014-01-01

    Due to the fact that the fully connected feedforward neural network cannot effectively deal with the problem of time-varying systems, a dynamic adaptive modular neural network model is proposed. In this model, the subtractive clustering algorithm is applied to online identification of the spatial distribution of the condition data. RBF neurons are used to decompose the learning sample space and, combined with a fuzzy strategy, to dynamically allocate the learning data of different sub-sample spaces to different sub-networks. Finally, the output of the modular neural network is achieved by integrating the outputs of the sub-networks. The number of sub-networks and the architecture of the sub-networks can be adaptively adjusted based on the current time-varying learning task. Experimental results on different time-varying systems show that the proposed model can effectively track time-varying systems.

  19. Fuzzy Multiresolution Neural Networks

    Science.gov (United States)

    Ying, Li; Qigang, Shang; Na, Lei

    A fuzzy multi-resolution neural network (FMRANN) based on a particle swarm algorithm is proposed to approximate arbitrary nonlinear functions. The activation functions of the FMRANN consist not only of wavelet functions, but also of scaling functions, whose translation and dilation parameters are adjustable. A set of fuzzy rules is involved in the FMRANN. Each rule corresponds either to a subset consisting of scaling functions, or to a sub-wavelet neural network consisting of wavelets with the same dilation parameters. Incorporating the time-frequency localization and multi-resolution properties of wavelets with the self-learning ability of fuzzy neural networks, the approximation ability of the FMRANN can be remarkably improved. A particle swarm algorithm is adopted to learn the translation and dilation parameters of the wavelets and to adjust the shape of the membership functions. Simulation examples are presented to validate the effectiveness of the FMRANN.

  20. Exploiting network redundancy for low-cost neural network realizations.

    NARCIS (Netherlands)

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT

    1996-01-01

    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  1. Rule Extraction:Using Neural Networks or for Neural Networks?

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Zhou

    2004-01-01

    In the research of rule extraction from neural networks, fidelity describes how well the rules mimic the behavior of a neural network while accuracy describes how well the rules can be generalized. This paper identifies the fidelity-accuracy dilemma. It argues to distinguish rule extraction using neural networks and rule extraction for neural networks according to their different goals, where fidelity and accuracy should be excluded from the rule quality evaluation framework, respectively.

  2. One-day-ahead streamflow forecasting via super-ensembles of several neural network architectures based on the Multi-Level Diversity Model

    Science.gov (United States)

    Brochero, Darwin; Hajji, Islem; Pina, Jasson; Plana, Queralt; Sylvain, Jean-Daniel; Vergeynst, Jenna; Anctil, Francois

    2015-04-01

    Theories about generalization error with ensembles are mainly based on the diversity concept, which promotes resorting to many members of different properties to support mutually agreeable decisions. Kuncheva (2004) proposed the Multi Level Diversity Model (MLDM) to promote diversity in model ensembles, combining different data subsets, input subsets, models, parameters, and including a combiner level in order to optimize the final ensemble. This work tests the hypothesis about the minimisation of the generalization error with ensembles of Neural Network (NN) structures. We used the MLDM to evaluate two different scenarios: (i) ensembles from a same NN architecture, and (ii) a super-ensemble built by a combination of sub-ensembles of many NN architectures. The time series used correspond to the 12 basins of the MOdel Parameter Estimation eXperiment (MOPEX) project that were used by Duan et al. (2006) and Vos (2013) as benchmark. Six architectures are evaluated: FeedForward NN (FFNN) trained with the Levenberg Marquardt algorithm (Hagan et al., 1996), FFNN trained with SCE (Duan et al., 1993), Recurrent NN trained with a complex method (Weins et al., 2008), Dynamic NARX NN (Leontaritis and Billings, 1985), Echo State Network (ESN), and leak integrator neuron (L-ESN) (Lukosevicius and Jaeger, 2009). Each architecture performs separately an Input Variable Selection (IVS) according to a forward stepwise selection (Anctil et al., 2009) using mean square error as objective function. Post-processing by Predictor Stepwise Selection (PSS) of the super-ensemble has been done following the method proposed by Brochero et al. (2011). IVS results showed that the lagged stream flow, lagged precipitation, and Standardized Precipitation Index (SPI) (McKee et al., 1993) were the most relevant variables. They were respectively selected as one of the firsts three selected variables in 66, 45, and 28 of the 72 scenarios. A relationship between aridity index (Arora, 2002) and NN

  3. Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition

    Directory of Open Access Journals (Sweden)

    YuKang Jia

    2017-01-01

    Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) relating to time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper, we combine them with Connectionist Temporal Classification (CTC) to study piano's continuous note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments to show recognition rates and numbers of iterations of LSTM with a single layer, LSTMP with a single layer, and Deep LSTM (DLSTM, LSTM with multiple layers). As a result, the single-layer LSTMP proves to perform much better than the single-layer LSTM in both time and recognition rate; that is, LSTMP has fewer parameters and therefore reduces the training time, and, moreover, benefiting from the projection layer, LSTMP has better performance, too. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% because of the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.
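
    The parameter saving obtained from the projection layer can be illustrated directly, since recent PyTorch releases expose it through the proj_size argument of nn.LSTM. The sizes below are arbitrary assumptions, and the sketch is not the paper's piano-note recognizer.

        # Sketch: a projection layer (LSTMP) reduces parameters relative to a plain LSTM
        # with the same hidden size.  Recent PyTorch releases expose this through the
        # proj_size argument of nn.LSTM; sizes here are arbitrary, not the paper's.
        import torch
        import torch.nn as nn

        def n_params(m):
            return sum(p.numel() for p in m.parameters())

        lstm = nn.LSTM(input_size=88, hidden_size=512, batch_first=True)
        lstmp = nn.LSTM(input_size=88, hidden_size=512, proj_size=128, batch_first=True)
        print("LSTM parameters: ", n_params(lstm))
        print("LSTMP parameters:", n_params(lstmp))

        x = torch.randn(2, 100, 88)                    # (batch, time, features)
        y, _ = lstmp(x)
        print(y.shape)                                  # torch.Size([2, 100, 128])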

  4. Architecture in the network society

    DEFF Research Database (Denmark)

    2004-01-01

    Under the theme Architecture in the Network Society, participants were invited to focus on the dialogue and sharing of knowledge between architects and other disciplines and to reflect on, and propose, new methods in the design process, to enhance and improve the impact of information technology on architecture. This conference and the past history of eCAADe are an example of establishing a social network for the sharing of knowledge regarding the use of computers in architectural education and research.

  5. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks.

  6. Machine learning on-a-chip: a high-performance low-power reusable neuron architecture for artificial neural networks in ECG classifications.

    Science.gov (United States)

    Sun, Yuwen; Cheng, Allen C

    2012-07-01

    Artificial neural networks (ANNs) are a promising machine learning technique in classifying non-linear electrocardiogram (ECG) signals and recognizing abnormal patterns suggesting risks of cardiovascular diseases (CVDs). In this paper, we propose a new reusable neuron architecture (RNA) enabling a performance-efficient and cost-effective silicon implementation for ANN. The RNA architecture consists of a single layer of physical RNA neurons, each of which is designed to use minimal hardware resources (e.g., a single 2-input multiplier-accumulator is used to compute the dot product of two vectors). By carefully applying the principle of time sharing, RNA can multiplex this single layer of physical neurons to efficiently execute both feed-forward and back-propagation computations of an ANN while conserving the area and reducing the power dissipation of the silicon. A three-layer 51-30-12 ANN is implemented in RNA to perform the ECG classification for CVD detection. This RNA hardware also allows on-chip automatic training updates. A quantitative design space exploration in area, power dissipation, and execution speed between RNA and three other implementations representative of different reusable hardware strategies is presented and discussed. Compared with an equivalent software implementation in C executed on an embedded microprocessor, the RNA ASIC achieves three orders of magnitude improvements in both execution speed and energy efficiency.
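
    The time-sharing idea behind RNA can be mimicked in software: a single multiply-accumulate operation is reused, step by step, to evaluate every neuron of a layer in turn, instead of dedicating a multiplier to each synapse. The sketch below is only a software analogue of that scheduling, using the 51-30-12 topology mentioned in the abstract; it says nothing about the actual silicon design.

        # Sketch of the time-sharing idea: one multiply-accumulate (MAC) unit is reused
        # to evaluate every neuron of a layer in turn, instead of dedicating a multiplier
        # to each synapse.  This is a software analogue of the hardware scheduling only.
        import numpy as np

        def layer_with_single_mac(x, W, b):
            """Compute one layer's outputs using a single scalar MAC in a loop."""
            outputs = np.empty(W.shape[0])
            for neuron in range(W.shape[0]):            # neurons are processed one at a time
                acc = b[neuron]
                for k in range(len(x)):                 # one MAC operation per time step
                    acc += W[neuron, k] * x[k]
                outputs[neuron] = np.tanh(acc)
            return outputs

        rng = np.random.default_rng(0)
        x = rng.normal(size=51)                         # e.g. one ECG feature vector
        W1, b1 = rng.normal(size=(30, 51)), np.zeros(30)
        W2, b2 = rng.normal(size=(12, 30)), np.zeros(12)
        hidden = layer_with_single_mac(x, W1, b1)       # 51-30-12 topology as in the record
        print(layer_with_single_mac(hidden, W2, b2))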

  7. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  8. Critical branching neural networks.

    Science.gov (United States)

    Kello, Christopher T

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.
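
    The notion of a critical branching ratio can be illustrated with a generic branching process, in which each spike triggers on average sigma descendant spikes and avalanches remain bounded yet long-lived when sigma is tuned to 1. The sketch below is a generic illustration of that idea, not the self-tuning spiking model of the paper.

        # Sketch of the branching-ratio idea behind "critical branching": each spike
        # triggers on average sigma descendant spikes, and avalanche sizes grow heavy-
        # tailed as sigma approaches 1.  Generic illustration, not the paper's model.
        import numpy as np

        rng = np.random.default_rng(0)

        def avalanche_size(sigma, max_size=10_000):
            active, size = 1, 1
            while active and size < max_size:
                active = rng.poisson(sigma * active)     # descendants of the current wave
                size += active
            return size

        for sigma in (0.8, 1.0, 1.2):
            sizes = [avalanche_size(sigma) for _ in range(2000)]
            print(f"sigma={sigma}: mean avalanche size {np.mean(sizes):.1f}, "
                  f"max {max(sizes)}")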

  9. Generalized Adaptive Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  10. Research of The Deeper Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao You Rong

    2016-01-01

    Neural networks (NNs) have powerful computational abilities and could be used in a variety of applications; however, training these networks is still a difficult problem. With different network structures, many neural models have been constructed. In this report, a deeper neural network (DNN) architecture is proposed. The training algorithm of the deeper neural network involves searching for the global optimal point on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed from simple to complicated cases, and the features of the error surface are obtained. Based on these characteristics, the initialization method and training algorithm of DNNs are designed. For the initialization, a block-uniform design method is proposed which separates the error surface into blocks and finds the optimal block using the uniform design method. For the training algorithm, an improved gradient-descent method is proposed which adds a penalty term to the cost function of the standard gradient-descent method. This algorithm gives the network a strong approximation ability and keeps the network state stable. All of these improve the practicality of the neural network.

  11. Neural simulations on multi-core architectures

    Directory of Open Access Journals (Sweden)

    Hubert Eichner

    2009-07-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.

  12. Quantum Neural Networks

    CERN Document Server

    Gupta, S; Gupta, Sanjay

    2002-01-01

    This paper initiates the study of quantum computing within the constraints of using a polylogarithmic ($O(\log^k n)$, $k \geq 1$) number of qubits and a polylogarithmic number of computation steps. The current research in the literature has focussed on using a polynomial number of qubits. A new mathematical model of computation called Quantum Neural Networks (QNNs) is defined, building on Deutsch's model of quantum computational network. The model introduces a nonlinear and irreversible gate, similar to the speculative operator defined by Abrams and Lloyd. The precise dynamics of this operator are defined and, while giving examples in which nonlinear Schrödinger's equations are applied, we speculate on its possible implementation. The many practical problems associated with the current model of quantum computing are alleviated in the new model. It is shown that QNNs of logarithmic size and constant depth have the same computational power as threshold circuits, which are used for modeling neural network...

  13. Interval probabilistic neural network.

    Science.gov (United States)

    Kowalski, Piotr A; Kulczycki, Piotr

    2017-01-01

    Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems increase the independence from human intervention in obtaining the analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural network approach for use in classifying interval information. The presented neural methodology is a generalization of the probabilistic neural network for interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal potential losses arising from classification errors. In this article, the topological structure of the network and the learning process are described in detail. The correctness of the proposed procedure has been verified by way of numerical tests. These tests include examples of both synthetic data and benchmark instances. The results of the numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.

  14. Hopfield Neural Network Approach to Clustering in Mobile Radio Networks

    Institute of Scientific and Technical Information of China (English)

    JiangYan; LiChengshu

    1995-01-01

    In this paper, the Hopfield neural network (NN) algorithm is developed for selecting gateways in cluster linkage. The linked cluster (LC) architecture is assumed to achieve distributed network control in multihop radio networks through local controllers, called clusterheads, and the nodes connecting these clusterheads are defined to be gateways. In Hopfield NN models, the most critical issue being the determination of connection weights, we use the approach of Lagrange multipliers (LM) for its dynamic nature.

  15. Network architecture as internet governance

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-10-01

    The architecture of a networked system is its underlying technical and logical structure, including transmission equipment, communication protocols, infrastructure, and connectivity between its components or nodes. This article introduces the idea of network architecture as internet governance, and more specifically, it outlines the dialectic between centralised and distributed architectures, institutions and practices, and how they mutually affect each other. The article argues that network architecture is internet governance in the sense that, by changing the design of the networks subtending internet-based services and the global internet itself, its politics are affected: the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, the fair competition between actors of the internet market.

  16. Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kapil Nahar

    2012-12-01

    An artificial neural network is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.

  18. A dual neural network for convex quadratic programming subject to linear equality and inequality constraints

    Science.gov (United States)

    Zhang, Yunong; Wang, Jun

    2002-06-01

    A recurrent neural network called the dual neural network is proposed in this Letter for solving the strictly convex quadratic programming problems. Compared to other recurrent neural networks, the proposed dual network with fewer neurons can solve quadratic programming problems subject to equality, inequality, and bound constraints. The dual neural network is shown to be globally exponentially convergent to optimal solutions of quadratic programming problems. In addition, compared to neural networks containing high-order nonlinear terms, the dynamic equation of the proposed dual neural network is piecewise linear, and the network architecture is thus much simpler. The global convergence behavior of the dual neural network is demonstrated by an illustrative numerical example.

  19. Compressing Neural Networks with the Hashing Trick

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to ...
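
    The weight-sharing mechanism behind HashedNets can be sketched as follows: every entry of a virtual weight matrix is mapped by a cheap hash function to one of a small number of real parameters. The hash function, matrix sizes and parameter count below are assumptions for illustration, and the sketch omits details of the published method such as the sign hash and the training procedure.

        # Sketch of the hashing trick for weight sharing: every (i, j) entry of a virtual
        # weight matrix is mapped by a cheap hash to one of K real parameters, so the
        # layer behaves like a full matrix while storing only K numbers.
        import numpy as np
        import zlib

        def hashed_weight_matrix(shape, params, seed=0):
            rows, cols = shape
            W = np.empty(shape)
            for i in range(rows):
                for j in range(cols):
                    key = f"{seed}:{i}:{j}".encode()
                    W[i, j] = params[zlib.crc32(key) % len(params)]
            return W

        rng = np.random.default_rng(0)
        real_params = rng.normal(size=64)               # K = 64 trainable weights
        W_virtual = hashed_weight_matrix((256, 128), real_params)
        print(W_virtual.shape, "virtual weights backed by", real_params.size, "parameters")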

  20. VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK

    African Journals Online (AJOL)

    VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK: A CASE STUDY OF RUMUOLA DISTRIBUTION NETWORK. ... The artificial neural networks controller engaged to controlling the dynamic voltage ...

  1. Neural networks for segmentation, tracking, and identification

    Science.gov (United States)

    Rogers, Steven K.; Ruck, Dennis W.; Priddy, Kevin L.; Tarr, Gregory L.

    1992-09-01

    The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes the conventional artificial neural networks such as the multilayer perceptrons and also biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing data to learn features for subsequent recognition. A common first step for processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low frequency Fourier coefficients. Much of human visual perception can be modelled by assuming low frequency Fourier as the feature space used by the human visual system. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction. For object recognition, the KLT may not be optimal.
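
    The Karhunen-Loeve transform referred to in this record is the eigen-decomposition of the data covariance matrix (essentially what is now called principal component analysis). A small sketch computing KLT coefficients of flattened image patches, for use as features or for reconstruction, is given below; the patch data is random and stands in for real images.

        # Sketch: Karhunen-Loeve transform (eigen-decomposition of the data covariance,
        # i.e. PCA) applied to flattened image patches, keeping the leading coefficients
        # as features for later recognition or reconstruction.
        import numpy as np

        rng = np.random.default_rng(0)
        patches = rng.normal(size=(500, 64))            # 500 flattened 8x8 "patches"

        mean = patches.mean(axis=0)
        centered = patches - mean
        cov = centered.T @ centered / (len(patches) - 1)
        eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
        basis = eigvecs[:, ::-1][:, :16]                # 16 leading KLT basis vectors

        features = centered @ basis                     # KLT coefficients used as features
        reconstruction = features @ basis.T + mean      # least-squares linear reconstruction
        print(features.shape, "mean reconstruction error:",
              float(np.mean((reconstruction - patches) ** 2)))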

  2. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  3. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  4. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. Richard... Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  5. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  6. [Artificial neural networks in Neurosciences].

    Science.gov (United States)

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network and its operation inspired by the nervous system and the way the inputs are coded and the process of orthogonalization of patterns.

  7. via dynamic neural networks

    Directory of Open Access Journals (Sweden)

    J. Reyes-Reyes

    2000-01-01

    In this paper, an adaptive technique is suggested to provide the passivity property for a class of partially known SISO nonlinear systems. A simple Dynamic Neural Network (DNN), containing only two neurons and without any hidden layers, is used to identify the unknown nonlinear system. By means of a Lyapunov-like analysis, the new learning law for this DNN, guaranteeing both successful identification and passivation effects, is derived. Based on this adaptive DNN model, an adaptive feedback controller, serving a wide class of nonlinear systems with an a priori incomplete model description, is designed. Two typical examples illustrate the effectiveness of the suggested approach.

  8. Network Analysis, Architecture, and Design

    CERN Document Server

    McCabe, James D

    2007-01-01

    Traditionally, networking has had little or no basis in analysis or architectural development, with designers relying on technologies they are most familiar with or being influenced by vendors or consultants. However, the landscape of networking has changed so that network services have now become one of the most important factors to the success of many third generation networks. It has become an important feature of the designer's job to define the problems that exist in his network, choose and analyze several optimization parameters during the analysis process, and then prioritize and evalua

  9. VLSI neural system architecture for finite ring recursive reduction.

    Science.gov (United States)

    Zhang, D; Jullien, G A

    1996-12-01

    The use of neural-like networks to implement finite ring computations has been presented in a previous paper. This paper develops an efficient VLSI neural system architecture for the finite ring recursive reduction (FRRR), including module reduction, MSB carry iteration and feedforward processing. These techniques deal with the basic principles involved in constructing a FRRR, and their implementations are efficiently matched to the VLSI medium. Compared with the other structure models for finite ring computation (e.g. modification of binary arithmetic logic and bit-steered ROMs), the FRRR structure has the lowest area complexity in silicon while maintaining a high throughput rate. Examples of several implementations are used to illustrate the effectiveness of the FRRR architecture.

  10. Time Series Prediction based on Hybrid Neural Networks

    Directory of Open Access Journals (Sweden)

    S. A. Yarushev

    2016-01-01

    In this paper, we suggest a hybrid approach to the time series forecasting problem. In the first part of the paper, we review time series forecasting methods based on hybrid neural networks and neuro-fuzzy approaches. Hybrid neural networks are especially effective for specific types of applications, such as forecasting or classification problems, in contrast to traditional monolithic neural networks. These classes of problems include problems with different characteristics in different modules. The main part of the paper gives a detailed overview of the benefits of hybrid networks, their architectures and their performance compared with traditional neural networks. Hybrid neural network models for time series forecasting are discussed in the paper. Experiments with modular neural networks are given.

  11. Neural networks for function approximation in nonlinear control

    Science.gov (United States)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.

  12. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    M Sinha; P K Kalra; K Kumar

    2000-04-01

    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher neuron model (multiplicative aggregation function). It can adapt to standard neuron and higher order neuron, as well as a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than Kalman Filter (KF) and Feedforward Multilayer Neural Network (FMNN) (also simply referred to as Artificial Neural Network, ANN) with lambda-gamma learning. The typical simulation runs also bring out the superiority of the proposed scheme over Kalman filter from the standpoint of computation time and the amount of data needed for the desired degree of estimated accuracy for the specific problem of orbit determination.
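
    One common reading of a compensatory neuron, used here only as an assumption for illustration, is a unit whose net input blends an additive (weighted-sum) aggregation with a multiplicative (product) aggregation through a compensation parameter. The sketch below follows that reading and is not claimed to be the exact model proposed in the paper.

        # Sketch of one possible compensatory neuron: its net input blends an additive
        # (weighted-sum) aggregation with a multiplicative (product) aggregation through
        # a compensation parameter gamma in [0, 1].  This reading is an assumption for
        # illustration and is not claimed to be the exact model proposed in the paper.
        import numpy as np

        def compensatory_neuron(x, w_add, w_mul, gamma, bias=0.0):
            additive = np.dot(w_add, x) + bias              # standard neuron aggregation
            multiplicative = np.prod(w_mul * x)             # higher-order aggregation
            net = gamma * additive + (1.0 - gamma) * multiplicative
            return np.tanh(net)

        x = np.array([0.2, -0.5, 0.8])
        print(compensatory_neuron(x, w_add=np.array([0.4, 0.1, -0.3]),
                                  w_mul=np.array([1.2, 0.7, 0.9]), gamma=0.6))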

  13. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The paper describes the methods used for prediction of financial data as well as the developed forecasting system based on a neural network. The architecture of a neural network whose inputs are four technical indicators derived from the raw data, plus the current day of the week, is presented. The network is used to forecast the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is backpropagation of the error. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed system is universal and can be applied to various financial instruments using only basic technical indicators as input data.
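
    A minimal sketch of this kind of setup is given below: four technical indicators plus the day of the week as inputs and a single hidden layer trained by backpropagation. The indicator definitions, window lengths and layer size are guesses for illustration (the record does not specify them), and the self-determining topology step is not reproduced; scikit-learn's MLPRegressor stands in for the authors' network.

```python
# Hypothetical sketch: four technical indicators plus day of week as inputs,
# one hidden layer, backpropagation training. Indicator definitions, window
# lengths and layer size are illustrative guesses, not the paper's configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_features(close, weekday):
    """close: array of daily closing prices; weekday: 0..4 for each day."""
    f = []
    for t in range(20, len(close) - 1):
        sma = close[t-10:t].mean()                      # simple moving average
        momentum = close[t] - close[t-5]                # 5-day momentum
        volatility = close[t-10:t].std()                # 10-day volatility
        roc = (close[t] - close[t-10]) / close[t-10]    # rate of change
        f.append([sma, momentum, volatility, roc, weekday[t]])
    targets = close[21:]                                # next-day price
    return np.array(f), targets

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 600)) + 100.0       # synthetic price series
weekdays = np.arange(600) % 5
X, y = make_features(prices, weekdays)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                             # train on all but last 50 days
print("test MSE:", np.mean((model.predict(X[-50:]) - y[-50:]) ** 2))
```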

  14. Using neural networks for the synthesis of microwave devices

    Directory of Open Access Journals (Sweden)

    V. O. Adamenko

    2012-10-01

    Full Text Available In this work the advantages of neural networks for the synthesis of microwave devices are considered. The problems which can occur when using neural networks as a universal approximating system for the synthesis of frequency-selective microwave devices are characterized, and solving these problems with an ensemble of neural networks is shown to be expedient. An architecture is offered for the ensemble used in the practical implementation of metal-dielectric microwave filters, taking into account their characteristics in the stop bands above and below the pass band. Further investigation of the proposed architecture for the synthesis of physical objects is shown to be worthwhile.

  15. Digital implementation of shunting-inhibitory cellular neural network

    Science.gov (United States)

    Hammadou, Tarik; Bouzerdoum, Abdesselam; Bermak, Amine

    2000-05-01

    Shunting inhibition is a model of early visual processing which can provide contrast and edge enhancement, and dynamic range compression. An architecture of a digital Shunting Inhibitory Cellular Neural Network for real-time image processing is presented. The proposed architecture is intended to be used in a complete vision system for edge detection and image enhancement. The hardware architecture is modeled and simulated in VHDL. Simulation results show the functional validity of the proposed architecture.

  16. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  17. An adaptive holographic implementation of a neural network

    Science.gov (United States)

    Downie, John D.; Hine, Butler P., III; Reid, Max B.

    1990-01-01

    A holographic implementation for neural networks is proposed and demonstrated as an alternative to the optical matrix-vector multiplier architecture. In comparison, the holographic architecture makes more efficient use of the system space-bandwidth product for certain types of neural networks. The principal network component is a thermoplastic hologram, used to provide both interconnection weights and beam direction. Given the updatable nature of this type of hologram, adaptivity or network learning is possible in the optical system. Two networks with fixed weights are experimentally implemented and verified, and for one of these examples the advantage of the holographic implementation with respect to the matrix-vector processor is demonstrated.

  18. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  20. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.
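
    To make the two ideas named above concrete, the sketch below shows a two-pathway CNN whose final "fully connected" layer is implemented as a 1x1 convolution, so dense per-pixel class scores come out in a single pass. The channel counts, kernel sizes and number of classes are illustrative assumptions, not the architecture published by Havaei et al.

```python
# Hypothetical sketch of a two-pathway CNN whose final "fully connected" layer is
# a 1x1 convolution, enabling dense per-pixel predictions in one forward pass.
# Channel counts, kernel sizes and class count are illustrative assumptions.
import torch
import torch.nn as nn

class TwoPathwaySketch(nn.Module):
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        # Local pathway: small receptive field, fine detail.
        self.local = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Global pathway: larger kernels give wider spatial context.
        self.context = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=13, padding=6), nn.ReLU(),
        )
        # "Fully connected" classification layer expressed as a 1x1 convolution.
        self.classifier = nn.Conv2d(32 + 32, n_classes, kernel_size=1)

    def forward(self, x):
        features = torch.cat([self.local(x), self.context(x)], dim=1)
        return self.classifier(features)   # per-pixel class scores

# One forward pass over a batch of 4-channel MR slices (e.g. T1, T1c, T2, FLAIR).
scores = TwoPathwaySketch()(torch.randn(2, 4, 64, 64))
print(scores.shape)   # torch.Size([2, 5, 64, 64])
```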

  1. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and the corresponding interconnection weights. This problem has been widely studied in many research works, but existing solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
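
    The basic ELM step on which the record relies can be sketched as follows: hidden-layer weights are drawn at random and never trained, and only the output weights are obtained in one shot with a least-squares solve. This is only the plain ELM step on toy data, not the paper's architecture-design or input-selection procedure.

```python
# Minimal sketch of the basic Extreme Learning Machine: random, untrained hidden
# weights; output weights solved in one shot via the pseudo-inverse.
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # output weights by pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary classification: two Gaussian blobs, one-hot targets.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
T = np.vstack([np.tile([1, 0], (100, 1)), np.tile([0, 1], (100, 1))])
W, b, beta = elm_train(X, T)
accuracy = np.mean(elm_predict(X, W, b, beta).argmax(axis=1) == T.argmax(axis=1))
print("training accuracy:", accuracy)
```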

  2. Neural network technologies for image classification

    Science.gov (United States)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

    We analyze the classes of problems with an objective necessity to use neural network technologies, i.e. representation and resolution problems in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine and other fields. We reviewed different approaches for texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the spectroradiometer MODIS. The cloud texture is described by the statistical characteristics of the GLCM (Gray Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied for image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. We also experimentally chose the optimal architecture and parameters for the PNN model used for image classification.
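
    The PNN decision rule used in this record is simple enough to sketch: each training sample becomes a pattern unit, class scores are Parzen-window density estimates with a Gaussian kernel, and the class with the largest score wins. In the sketch the feature vectors stand in for precomputed GLCM texture statistics, and the smoothing parameter is an illustrative choice rather than the value tuned in the paper.

```python
# Minimal sketch of a probabilistic neural network (PNN) classifier: Gaussian
# Parzen-window class scores over stored training samples, argmax decision.
# Features stand in for precomputed GLCM statistics; sigma is illustrative.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    preds = []
    classes = np.unique(y_train)
    for x in X_test:
        scores = []
        for c in classes:
            d2 = np.sum((X_train[y_train == c] - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Toy example with two "texture" classes in a 4-dimensional feature space.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(2, 1, (30, 4))])
y_train = np.array([0] * 30 + [1] * 30)
X_test = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(2, 1, (5, 4))])
print(pnn_predict(X_train, y_train, X_test, sigma=0.8))
```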

  3. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. The proposed model is analyzed with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) methods and compared with other models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN); the empirical results show that the proposed neural network displays the best performance among these networks in financial time series forecasting. Further, the predictive effects are tested on the SSE, TWSE, KOSPI, and Nikkei225 indices with the established model, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach gives good performance in predicting values of the stock market indices.

  4. Impact of Mutation Weights on Training Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Lamia Abed Noor Muhammed

    2014-07-01

    Full Text Available A neural network is a computational approach based on a simulation of biological neural networks. Its behaviour is governed by several parameters: the learning rate, the initialized weights, the network architecture, and so on. This paper focuses on one of these parameters, the weights. The aim is to shed light on the mutation of weights during network training and its effects on the results. The experiment was done using a backpropagation neural network with one hidden layer. The results reveal the role of mutation in escaping from local minima and making the change
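
    One plausible reading of this idea is sketched below: a small one-hidden-layer network trained by ordinary backpropagation, with Gaussian mutation applied to the weights whenever the loss stops improving, so the search can escape a local minimum. The plateau test and mutation magnitude are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of weight mutation during backpropagation training: add
# small Gaussian noise to the weights when the loss plateaus. The plateau test
# and mutation scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = (np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (2, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (10, 1)); b2 = np.zeros(1)
lr, best_loss, stall = 0.05, np.inf, 0

for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y
    loss = float(np.mean(err ** 2))

    # Plateau detection: mutate the weights if no improvement for 100 epochs.
    if loss < best_loss - 1e-6:
        best_loss, stall = loss, 0
    else:
        stall += 1
    if stall >= 100:
        W1 += rng.normal(0, 0.1, W1.shape)   # mutation step
        W2 += rng.normal(0, 0.1, W2.shape)
        stall = 0

    # Ordinary backpropagation update.
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X);  db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("best loss reached:", best_loss)
```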

  5. Architecture of Wireless Network

    Directory of Open Access Journals (Sweden)

    Ram Kumar Singh

    2012-03-01

    Full Text Available To allow for wireless communications across a specific geographic area, the base stations of a communication network must be deployed to provide sufficient radio coverage to every mobile user. The base stations, in turn, must be linked to a central hub called the MSC (mobile switching centre). The mobile switching centre provides connectivity between the PSTN (public switched telephone network) and the numerous wireless base stations, and ultimately among all of the wireless subscribers in a system. The global telecommunications grid of the PSTN connects conventional (landline) telephone switching centres (called central offices) with MSCs all around the world.

  6. Neural networks in astronomy.

    Science.gov (United States)

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and therefore will be structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  7. Gait Recognition Based on Convolutional Neural Networks

    Science.gov (United States)

    Sokolova, A.; Konushin, A.

    2017-05-01

    In this work we investigate the problem of people recognition by their gait. For this task, we implement deep learning approach using the optical flow as the main source of motion information and combine neural feature extraction with the additional embedding of descriptors for representation improvement. In order to find the best heuristics, we compare several deep neural network architectures, learning and classification strategies. The experiments were made on two popular datasets for gait recognition, so we investigate their advantages and disadvantages and the transferability of considered methods.

  8. Logic Mining Using Neural Networks

    CERN Document Server

    Sathasivam, Saratha

    2008-01-01

    Knowledge can be gained from experts, specialists in the area of interest, or it can be gained by induction from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Data mining methods are important in the management of complex systems. There are many technologies available to data mining practitioners, including Artificial Neural Networks, Regression, and Decision Trees. Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural network methods are not commonly used for data mining tasks, because they often produce incomprehensible models and require long training times. One way in which the collective properties of a neural network may be used to implement a computational task is by way of the concept of energy minimization. The Hopfield network is a well-known example of such an approach. The Hopfield network is useful as content addressable memory or an analog computer for s...
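
    The energy-minimization idea mentioned here can be sketched in a few lines: patterns are stored with a Hebbian rule and recalled by asynchronous updates that only ever lower the network energy, so a noisy cue settles into the nearest stored pattern. Pattern length, noise level and update count are illustrative choices.

```python
# Minimal sketch of a Hopfield network as content-addressable memory: Hebbian
# storage, asynchronous recall, decreasing energy. Sizes are illustrative.
import numpy as np

def store(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / n

def recall(W, state, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(len(s))    # asynchronous update of one random unit
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(W, s):
    return -0.5 * s @ W @ s

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))         # three stored bipolar patterns
W = store(patterns)

noisy = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)        # corrupt 10 of 64 bits
noisy[flip] *= -1
restored = recall(W, noisy)
print("bits recovered:", int(np.sum(restored == patterns[0])), "of 64")
print("energy before/after:", energy(W, noisy), energy(W, restored))
```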

  9. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  10. Learning sequential control in a Neural Blackboard Architecture for in situ concept reasoning

    NARCIS (Netherlands)

    Velde, van der Frank; Besold, Tarek R.; Lamb, Luis; Serafini, Luciano; Tabor, Whitney

    2016-01-01

    Simulations are presented and discussed of learning sequential control in a Neural Blackboard Architecture (NBA) for in situ concept-based reasoning. Sequential control is learned in a reservoir network, consisting of columns with neural circuits. This allows the reservoir to control the dynamics of

  12. Optimization of Evolutionary Neural Networks Using Hybrid Learning Algorithms

    OpenAIRE

    Abraham, Ajith

    2004-01-01

    Evolutionary artificial neural networks (EANNs) refer to a special class of artificial neural networks (ANNs) in which evolution is another fundamental form of adaptation in addition to learning. Evolutionary algorithms are used to adapt the connection weights, network architecture and learning algorithms according to the problem environment. Even though evolutionary algorithms are well known as efficient global search algorithms, very often they miss the best local solutions in the complex s...

  13. The loading problem for recursive neural networks.

    Science.gov (United States)

    Gori, Marco; Sperduti, Alessandro

    2005-10-01

    The present work deals with one of the major and not yet completely understood topics of supervised connectionist models. Namely, it investigates the relationships between the difficulty of a given learning task and the chosen neural network architecture. These relationships have been investigated and nicely established for some interesting problems in the case of neural networks used for processing vectors and sequences, but only a few studies have dealt with loading problems involving graphical inputs. In this paper, we present sufficient conditions which guarantee the absence of local minima of the error function in the case of learning directed acyclic graphs with recursive neural networks. We introduce topological indices which can be directly calculated from the given training set and which allow us to design a neural architecture with a local-minima-free error function. In particular, we conceive a reduction algorithm that involves both the information attached to the nodes and the topology, which significantly enlarges the class of problems with a unimodal error function proposed previously in the literature.

  14. Architecture and Robust Networks

    Science.gov (United States)

    2011-08-18

    [Fragmentary record. Recoverable figure caption: enzyme amount affects the intermediate reaction rate k, plotted against fragility for g=0 and g=1; either large k or large g is required to minimize fragility, but large k requires high metabolic overhead and large g requires high enzyme complexity. The remaining text refers to robustness at the circuit and physical level and to cleaner integration of routing, scheduling, power control, and network coding.]

  15. Dynamic Neural Fields as a Step Towards Cognitive Neuromorphic Architectures

    Directory of Open Access Journals (Sweden)

    Yulia Sandamirskaya

    2014-01-01

    Full Text Available Dynamic Field Theory (DFT) is an established framework for modelling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as an analogue Very Large Scale Integration (VLSI) device. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. In addition, I also identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning.
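
    To illustrate the DNF-as-soft-WTA relationship the record builds on, the sketch below simulates a one-dimensional field with Amari-style dynamics: local Gaussian excitation, global inhibition and a sigmoid output function. With these (purely illustrative) parameters, the stronger of two input bumps forms a self-stabilized peak.

```python
# Hypothetical sketch of a one-dimensional Dynamic Neural Field: Amari-style
# dynamics with local excitation, global inhibition and a sigmoid output.
# All parameter values are illustrative choices.
import numpy as np

n, dt, tau, h = 100, 1.0, 10.0, -5.0            # field size, step, time constant, resting level
x = np.arange(n)
# Interaction kernel: Gaussian excitation minus constant global inhibition.
kernel = 8.0 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2) - 1.5

def sigmoid(u, beta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

# External input: two Gaussian bumps of different strength.
inp = 6.0 * np.exp(-0.5 * ((x - 30) / 4.0) ** 2) + 4.5 * np.exp(-0.5 * ((x - 70) / 4.0) ** 2)

u = np.full(n, h)                                # field activation
for _ in range(500):
    interaction = kernel @ sigmoid(u) / n
    u += dt / tau * (-u + h + inp + interaction)

print("peak location:", int(np.argmax(u)))       # expected near 30, the stronger input
```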

  16. Artificial Neural Network Analysis System

    Science.gov (United States)

    2007-11-02

    [Fragmentary report documentation page. Recoverable details: title "Artificial Neural Network Analysis System"; company Atlantic...; contract no. DASG60-00-M-0201; purchase request "Foot in the Door-01"; author Powell, Bruce C; report dated 27-02-2001, covering 28-10-2000 to 27-02-2001.]

  17. Linearizing the Characteristics of Gas Sensors using Neural Network

    Directory of Open Access Journals (Sweden)

    Gowri shankari B

    2015-03-01

    Full Text Available The paper describes embedding an arbitrarily connected neural network, with a more powerful network architecture, in an inexpensive microcontroller. Our objective is to extend the linear region of operation of nonlinear sensors. In order to implement more powerful neural network architectures on microcontrollers, a special neuron-by-neuron computing routine was developed in assembly language to obtain the fastest and shortest code. The embedded neural network requires a high-precision hyperbolic tangent, which was used as the neuron activation function. Implementing the neural network in a microcontroller makes the system superior to others in terms of faster response, smaller errors, and smoother response surfaces, but its efficient implementation on a microcontroller with simplified arithmetic was a further challenge. The process was demonstrated on a gas sensor problem, since such sensors are widely used to measure gas leakage in industry.

  18. Modular, Hierarchical Learning By Artificial Neural Networks

    Science.gov (United States)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks that are more structured than networks in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  19. Functional expansion representations of artificial neural networks

    Science.gov (United States)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight into architecture selection, pruning strategies, and learning algorithms. A long term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  20. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  1. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    examined, and it appears that considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible, and therefore it is not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models....... - Control concepts including parameter estimation - Control concepts including inverse modelling - Control concepts including optimal control For each of the three groups, different control concepts and specific training methods are described in detail. Further, all control concepts are tested on the same......The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...

  2. The holographic neural network: Performance comparison with other neural networks

    Science.gov (United States)

    Klepko, Robert

    1991-10-01

    The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed, improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  3. Neural Network Communications Signal Processing

    Science.gov (United States)

    1994-08-01

    [Fragmentary record. Recoverable details: Technical Information Report for the Neural Network Communications Signal Processing Program, CDRL A003, 31 March 1993; a Software Development Plan; the system tracks changing jamming conditions to provide the decoder with the best log-likelihood ratio metrics at a given time; cites Artificial Neural Networks (ICANN-91), Volume 2, June 24-28, 1991, pp. 1677-1680, by Kohonen, Raivio, Simula, Venta and Henriksson.]

  4. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb

  5. On the Efficiency of Recurrent Neural Network Optimization Algorithms

    OpenAIRE

    Krause, Ben; Lu, Liang; Murray, Iain; Renals, Steve

    2015-01-01

    This study compares the sequential and parallel efficiency of training Recurrent Neural Networks (RNNs) with Hessian-free optimization versus a gradient descent variant. Experiments are performed using the long short term memory (LSTM) architecture and the newly proposed multiplicative LSTM (mLSTM) architecture. Results demonstrate a number of insights into these architectures and optimization algorithms, including that Hessian-free optimization has the potential for large efficiency gains in a h...

  6. Sensor Network Architectures for Monitoring Underwater Pipelines

    OpenAIRE

    Imad Jawhar; Jameela Al-Jaroodi; Nader Mohamed; Liren Zhang

    2011-01-01

    This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (Radio Frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network...

  7. VLSI implementation of neural networks.

    Science.gov (United States)

    Wilamowski, B M; Binfet, J; Kaynak, M O

    2000-06-01

    Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and hard to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed because CMOS neural networks have an activation function different than any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 microm technology. Using adequate approximation functions solved the problem of activation function. With this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors were increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from neural network hardware implementations were superior to the results obtained with fuzzy system approach.
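
    The order-of-magnitude error increase reported here for quantized weights is easy to reproduce in software: train a small network in floating point, then snap its weights to a coarse grid (imitating discrete transistor geometries) and compare approximation errors. The grid step, network size and target function below are illustrative, not the paper's 1.5-micron design values.

```python
# Hypothetical sketch of the weight-quantization effect: train in floating point,
# quantize weights to a coarse grid, compare errors. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100).reshape(-1, 1)
y = np.tanh(2 * x) + 0.3 * x ** 2

W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
for _ in range(4000):                                   # ordinary gradient descent
    h = np.tanh(x @ W1 + b1)
    err = h @ W2 + b2 - y
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x);  db1 = dh.mean(axis=0)
    W1 -= 0.1 * dW1; b1 -= 0.1 * db1; W2 -= 0.1 * dW2; b2 -= 0.1 * db2

def rms(W1, b1, W2, b2):
    return float(np.sqrt(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)))

step = 0.25                                             # coarse quantization grid
q = lambda w: np.round(w / step) * step
print("full-precision RMS error:", rms(W1, b1, W2, b2))
print("quantized RMS error:     ", rms(q(W1), q(b1), q(W2), q(b2)))
```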

  8. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  9. Multi-column Deep Neural Networks for Image Classification

    OpenAIRE

    Cireşan, Dan; Meier, Ueli; Schmidhuber, Juergen

    2012-01-01

    Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. ...

  10. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  11. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  12. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2016-07-14

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  13. Sensor network architectures for monitoring underwater pipelines.

    Science.gov (United States)

    Mohamed, Nader; Jawhar, Imad; Al-Jaroodi, Jameela; Zhang, Liren

    2011-01-01

    This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (radio frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring.

  14. Sensor Network Architectures for Monitoring Underwater Pipelines

    Directory of Open Access Journals (Sweden)

    Imad Jawhar

    2011-11-01

    Full Text Available This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (Radio Frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring.

  15. Perspective: network-guided pattern formation of neural dynamics

    OpenAIRE

    Hütt, Marc-Thorsten; Kaiser, Marcus; Claus C Hilgetag

    2014-01-01

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to...

  16. Data Architecture for Sensor Network

    Directory of Open Access Journals (Sweden)

    Jan Ježek

    2012-03-01

    Full Text Available The fast development of hardware in recent years has led to the high availability of simple sensing devices at minimal cost. As a consequence, there are many sensor networks nowadays. These networks can continuously produce a large amount of observed data, including the location of measurement. An optimal data architecture for such a purpose is a challenging issue due to its large scale and spatio-temporal nature. The aim of this paper is to describe the data architecture that was used in a particular solution for the storage of sensor data. This solution is based on the relational data model, concretely PostgreSQL and PostGIS. We mention our experience from real-world projects focused on car monitoring and a project targeting agricultural sensor networks. We also briefly demonstrate the possibilities of the client-side API and the potential of other open source libraries that can be used for cartographic visualization (e.g. GeoServer). The main objective is to describe the strengths and weaknesses of using a relational database system for this purpose and to introduce alternative approaches based on the NoSQL concept.

  17. An information theoretic approach for combining neural network process models.

    Science.gov (United States)

    Sridhar, D V.; Bartlett, E B.; Seagrave, R C.

    1999-07-01

    Typically neural network modelers in chemical engineering focus on identifying and using a single, hopefully optimal, neural network model. Using a single optimal model implicitly assumes that one neural network model can extract all the information available in a given data set and that the other candidate models are redundant. In general, there is no assurance that any individual model has extracted all relevant information from the data set. Recently, Wolpert (Neural Networks, 5(2), 241 (1992)) proposed the idea of stacked generalization to combine multiple models. Sridhar, Seagrave and Bartlett (AIChE J., 42, 2529 (1996)) implemented stacked generalization for neural network models by integrating multiple neural networks into an architecture known as stacked neural networks (SNNs). SNNs consist of a combination of the candidate neural networks and were shown to provide improved modeling of chemical processes. However, in Sridhar's work SNNs were limited to using a linear combination of artificial neural networks. While a linear combination is simple and easy to use, it can utilize only those model outputs that have a high linear correlation to the output. Models that are useful in a nonlinear sense are wasted if a linear combination is used. In this work we propose an information theoretic stacking (ITS) algorithm for combining neural network models. The ITS algorithm identifies and combines useful models regardless of the nature of their relationship to the actual output. The power of the ITS algorithm is demonstrated through three examples including application to a dynamic process modeling problem. The results obtained demonstrate that the SNNs developed using the ITS algorithm can achieve highly improved performance as compared to selecting and using a single hopefully optimal network or using SNNs based on a linear combination of neural networks.
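
    Since the record does not spell out the ITS algorithm itself, the sketch below only illustrates the linear stacking baseline it improves on: several trained candidate networks' predictions on held-out data are combined with least-squares weights. The base models and data are illustrative stand-ins.

```python
# Sketch of the linear stacking baseline discussed above (not the ITS algorithm):
# combine candidate networks' held-out predictions with least-squares weights.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=400)
X_tr, y_tr, X_val, y_val = X[:250], y[:250], X[250:], y[250:]

# Candidate networks with different hidden-layer sizes.
models = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
          for h in (3, 10, 30)]

# Stack: least-squares weights over the candidates' held-out predictions.
P_val = np.column_stack([m.predict(X_val) for m in models])
w, *_ = np.linalg.lstsq(P_val, y_val, rcond=None)
stacked = P_val @ w

for p in P_val.T:
    print("single-model MSE:", np.mean((p - y_val) ** 2))
print("stacked MSE:     ", np.mean((stacked - y_val) ** 2))
```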

  18. The architectural design of networks of protein domain architectures.

    Science.gov (United States)

    Hsu, Chia-Hsin; Chen, Chien-Kuo; Hwang, Ming-Jing

    2013-08-23

    Protein domain architectures (PDAs), in which single domains are linked to form multiple-domain proteins, are a major molecular form used by evolution for the diversification of protein functions. However, the design principles of PDAs remain largely uninvestigated. In this study, we constructed networks to connect domain architectures that had grown out from the same single domain for every single domain in the Pfam-A database and found that there are three main distinctive types of these networks, which suggests that evolution can exploit PDAs in three different ways. Further analysis showed that these three different types of PDA networks are each adopted by different types of protein domains, although many networks exhibit the characteristics of more than one of the three types. Our results shed light on nature's blueprint for protein architecture and provide a framework for understanding architectural design from a network perspective.

  19. Multigradient for Neural Networks for Equalizers

    Directory of Open Access Journals (Sweden)

    Chulhee Lee

    2003-06-01

    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that the multigradient outperforms backpropagation when neural networks are used as classifiers. When neural networks are used as equalizers in communications, they can be viewed as classifiers. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.

  20. FPGA implementation of a pyramidal Weightless Neural Networks learning system.

    Science.gov (United States)

    Al-Alawi, Raida

    2003-08-01

    A hardware architecture of a Probabilistic Logic Neuron (PLN) is presented. The suggested model facilitates the on-chip learning of pyramidal Weightless Neural Networks using a modified probabilistic search reward/penalty training algorithm. The penalization strategy of the training algorithm depends on a predefined parameter called the probabilistic search interval. A complete Weightless Neural Network (WNN) learning system is modeled and implemented on Xilinx XC4005E Field Programmable Gate Array (FPGA), allowing its architecture to be configurable. Various experiments have been conducted to examine the feasibility and performance of the WNN learning system. Results show that the system has a fast convergence rate and good generalization ability.

  1. Relations Between Wavelet Network and Feedforward Neural Network

    Institute of Scientific and Technical Information of China (English)

    刘志刚; 何正友; 钱清泉

    2002-01-01

    A comparison of construction forms and basis functions is made between the feedforward neural network and the wavelet network. The relations between them are studied by constructing the wavelet functions or dilation functions of the wavelet network from different activation functions of the feedforward neural network. It is concluded that certain wavelet functions are equal to linear combinations of several neurons in a feedforward neural network.

  2. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  3. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    This article presents a novel technique to distinguish between magnetizing inrush ... Protective relaying, Probabilistic neural network, Active power relays, Power ... Forward Neural Network (MFFNN) with back-propagation learning technique.

  4. Security Shift in Future Network Architectures

    NARCIS (Netherlands)

    Hartog, T.; Schotanus, H.A.; Verkoelen, C.A.A.

    2010-01-01

    In current practice military communication infrastructures are deployed as stand-alone networked information systems. Network-Enabled Capabilities (NEC) and combined military operations lead to new requirements which current communication architectures cannot deliver. This paper informs IT architect

  5. Neural blackboard architectures of combinatorial structures in cognition.

    Science.gov (United States)

    van der Velde, Frank; de Kamps, Marc

    2006-02-01

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables, and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural "blackboard" architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception. Perspectives and potential developments of the architectures are discussed.

  6. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of the item in the set-valued map of the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network.
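
    The two ingredients named here can be illustrated numerically (rather than as a neural circuit): a Huber-style smoothing of the nonsmooth l1 term, and a projected gradient iteration whose iterates are projected onto a simple constraint set. The penalty weight, smoothing parameter, step size and box constraint below are illustrative assumptions, not the paper's model.

```python
# Hypothetical numerical sketch: smoothed-l1 sparse reconstruction solved by a
# projected gradient iteration (projection onto a box). All parameters are
# illustrative; this is not the paper's neural-circuit model.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)   # sparse signal
b = A @ x_true

lam, mu, step = 0.01, 1e-3, 0.05         # penalty weight, smoothing, step size

def grad_smoothed_l1(x):
    # Gradient of a Huber-style smoothing of |x_i|.
    return np.where(np.abs(x) > mu, np.sign(x), x / mu)

x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - b) + lam * grad_smoothed_l1(x)
    x = np.clip(x - step * g, 0.0, 2.0)  # projection onto the box [0, 2]^n
print("reconstruction error:", np.linalg.norm(x - x_true))
```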

  7. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: morvymm@yahoo.com.m [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)

    2011-02-15

    With the Bonner spheres spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and Artificial Neural Networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite their advantages, ANNs still have some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters. In recent years hybrid technologies, combining ANNs and genetic algorithms, have been utilized. In this work, several ANN topologies were trained and tested using ANNs and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra using the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out. (Author)

  8. The next generation of neural network chips

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1997-08-01

    There have been many national and international neural network research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENA, NERVES 2) -- just to mention a few. Recent developments in the field of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions, and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could imagine just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers, and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.

  9. Spectral Unmixing Method of Remote Sensing Images in Variable Architecture of Neural Network

    Institute of Scientific and Technical Information of China (English)

    李熙; 石长民; 李畅; 陈锋锐; 田礼乔

    2012-01-01

    Spectral unmixing of remote sensing images is a research hotspot in the remote sensing field, and the Multilayer Perceptron (MLP) neural network is a common nonlinear spectral unmixing algorithm. However, there is currently no effective way to deal with the negative abundances derived by the network. To solve this problem, an MLP neural network with variable architecture is proposed: endmembers with negative abundances are discarded and the MLP architecture is modified to unmix the remaining endmembers, so that the remote sensing image is finally unmixed. An experiment using a simulated TM image of the Wuhan area shows that the average errors of the proposed method, the conventional MLP method and the linear spectral unmixing model are 0.077 7, 0.081 9 and 0.094 3 respectively, so the proposed method outperforms the other two and overcomes the negative abundance problem effectively.
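
    The endmember-dropping step can be sketched with a linear least-squares stand-in for the MLP: unmix, discard any endmember whose estimated abundance is negative, and re-solve on the remaining ones. This is only an analogue of the variable-architecture idea, assuming a simple linear mixing model; the array shapes and names are illustrative.

        import numpy as np

        def unmix_variable(pixel, endmembers):
            # iteratively drop endmembers with negative estimated abundances and re-solve
            active = list(range(endmembers.shape[1]))
            abundances = np.zeros(endmembers.shape[1])
            while active:
                a, *_ = np.linalg.lstsq(endmembers[:, active], pixel, rcond=None)
                if (a >= 0).all():
                    abundances[active] = a
                    break
                active = [idx for idx, val in zip(active, a) if val >= 0]
            return abundances

        # toy example: a 6-band pixel mixed from 3 endmember spectra
        rng = np.random.default_rng(2)
        E = rng.random((6, 3))
        pixel = E @ np.array([0.7, 0.3, 0.0])
        print(np.round(unmix_variable(pixel, E), 2))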

  10. The NASA Space Communications Data Networking Architecture

    Science.gov (United States)

    Israel, David J.; Hooke, Adrian J.; Freeman, Kenneth; Rush, John J.

    2006-01-01

    The NASA Space Communications Architecture Working Group (SCAWG) has recently been developing an integrated agency-wide space communications architecture in order to provide the necessary communication and navigation capabilities to support NASA's new Exploration and Science Programs. A critical element of the space communications architecture is the end-to-end Data Networking Architecture, which must provide a wide range of services required for missions ranging from planetary rovers to human spaceflight, and from sub-orbital space to deep space. Requirements for a higher degree of user autonomy and interoperability between a variety of elements must be accommodated within an architecture that necessarily features minimum operational complexity. The architecture must also be scalable and evolvable to meet mission needs for the next 25 years. This paper will describe the recommended NASA Data Networking Architecture, present some of the rationale for the recommendations, and will illustrate an application of the architecture to example NASA missions.

  11. Neural Network Approach to Railway Stand Lateral SKEW Control

    Directory of Open Access Journals (Sweden)

    Peter Mark Benes

    2014-02-01

    Full Text Available The paper presents a study of an adaptive approach to lateral skew control for an experimental railway stand. Preliminary experiments with the real experimental railway stand and simulations with its 3-D mechanical model indicate difficulties of model-based control of the device. Thus, the use of neural networks for identification and control of lateral skew is investigated. This paper focuses on real-data-based modelling of the railway stand by various neural network models, i.e. linear neural unit and quadratic neural unit architectures. Furthermore, training methods of these neural architectures, such as real-time recurrent learning and a variation of back-propagation through time, are examined, accompanied by a discussion of the experimental results.
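
    A minimal sketch of a quadratic neural unit of the kind mentioned above, trained here with plain sample-by-sample gradient descent rather than the recurrent learning rules studied in the paper; the input dimension, learning rate and toy target function are assumptions for illustration.

        import numpy as np

        def qnu(w, x):
            # quadratic neural unit: weighted sum of all pairwise products of the bias-augmented input
            xa = np.concatenate(([1.0], x))
            phi = np.outer(xa, xa)[np.triu_indices(len(xa))]
            return w @ phi, phi

        def train_qnu(X, y, lr=0.05, epochs=200):
            n = X.shape[1] + 1
            w = np.zeros(n * (n + 1) // 2)
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    pred, phi = qnu(w, xi)
                    w += lr * (yi - pred) * phi          # gradient step on the squared error
            return w

        rng = np.random.default_rng(3)
        X = rng.uniform(-1, 1, (200, 2))
        y = 0.5 * X[:, 0] ** 2 - X[:, 0] * X[:, 1] + 0.2    # a toy quadratic target
        w = train_qnu(X, y)
        print(round(qnu(w, np.array([0.3, -0.4]))[0], 3))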

  12. Move Ordering using Neural Networks

    NARCIS (Netherlands)

    Kocsis, L.; Uiterwijk, J.; Van Den Herik, J.

    2001-01-01

    © Springer-Verlag Berlin Heidelberg 2001. The efficiency of alpha-beta search algorithms heavily depends on the order in which the moves are examined. This paper focuses on using neural networks to estimate the likelihood of a move being the best in a certain position. The moves considered more like

  13. Neural Network based Consumption Forecasting

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    2016-01-01

    This paper describes a Neural Network based method for consumption forecasting. This work has been financed by the ENCOURAGE project. The aim of the ENCOURAGE project is to develop embedded intelligence and integration technologies that will directly optimize energy use in buildings and enable...

  14. Spin glasses and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Parga, N. (Comision Nacional de Energia Atomica, San Carlos de Bariloche (Argentina). Centro Atomico Bariloche; Universidad Nacional de Cuyo, San Carlos de Bariloche (Argentina). Inst. Balseiro)

    1989-07-01

    The mean-field theory of spin glass models has been used as a prototype of systems with frustration and disorder. Among the most interesting related systems are models of associative memory. In these lectures we review the main concepts developed to solve the Sherrington-Kirkpatrick model and its application to neural networks. (orig.).

  15. Artificial neural networks in medicine

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  16. Competition Based Neural Networks for Assignment Problems

    Institute of Scientific and Technical Information of China (English)

    李涛; LuyuanFang

    1991-01-01

    Competition based neural networks have been used to solve the generalized assignment problem and the quadratic assignment problem. Both problems are very difficult and are ε-approximation complete. The neural network approach has yielded highly competitive performance on the generalized assignment problem and good performance on the quadratic assignment problem. These neural networks are guaranteed to produce feasible solutions.

  17. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  18. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  19. Analysis of Neural Networks through Base Functions

    NARCIS (Netherlands)

    Zwaag, van der B.J.; Slump, C.H.; Spaanenburg, L.

    2002-01-01

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  20. A node architecture for disaster relief networking

    NARCIS (Netherlands)

    Hoeksema, F.W.; Heskamp, M.; Schiphorst, R.; Slump, C.H.

    2005-01-01

    In this paper we present a node architecture for a personal node in a cognitive ad-hoc disaster relief network. This architecture is motivated by the network system requirements, especially the single-hop distance and jamming-resilience requirements. It is shown that the power consumption of current-day

  1. Dynamic neural architecture for social knowledge retrieval.

    Science.gov (United States)

    Wang, Yin; Collins, Jessica A; Koski, Jessica; Nugiel, Tehila; Metoki, Athanasia; Olson, Ingrid R

    2017-04-18

    Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a 2-day training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.

  2. Performance Analysis of Software Effort Estimation Models Using Neural Networks

    Directory of Open Access Journals (Sweden)

    P.Latha

    2013-08-01

    Full Text Available Software effort estimation involves estimating the effort required to develop software. Cost and schedule overruns occur in software development due to wrong estimates made during the initial stage of development, so proper estimation is essential for successful completion of a software project. Many estimation techniques are available, among which neural network based techniques play a prominent role. The back-propagation network is the most widely used architecture, and the Elman neural network, a recurrent network, can be used on par with it. For a good predictor system the difference between estimated effort and actual effort should be as low as possible. Data from historic NASA projects are used for training and testing. The experimental results confirm that the back-propagation network is more efficient than the Elman neural network.
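
    For reference, a minimal Elman (simple recurrent) cell of the kind compared against back-propagation above: the hidden state is copied back as a context input at the next step. The layer sizes and the random sequence standing in for effort drivers are hypothetical.

        import numpy as np

        class ElmanCell:
            # minimal Elman recurrent unit: hidden activations are fed back as context
            def __init__(self, n_in, n_hid, n_out, seed=0):
                rng = np.random.default_rng(seed)
                self.Wx = 0.3 * rng.standard_normal((n_in, n_hid))
                self.Wc = 0.3 * rng.standard_normal((n_hid, n_hid))
                self.Wo = 0.3 * rng.standard_normal((n_hid, n_out))
                self.context = np.zeros(n_hid)

            def step(self, x):
                self.context = np.tanh(x @ self.Wx + self.context @ self.Wc)
                return self.context @ self.Wo

        cell = ElmanCell(n_in=4, n_hid=8, n_out=1)
        sequence = np.random.default_rng(1).random((5, 4))    # five hypothetical input vectors
        print(np.round(np.concatenate([cell.step(x) for x in sequence]), 3))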

  3. Morphological Classification of Galaxies Using Artificial Neural Networks

    CERN Document Server

    Ball, N M

    2001-01-01

    The results of morphological galaxy classifications performed by humans and by automated methods are compared. In particular, a comparison is made between the eyeball classifications of 454 galaxies in the Sloan Digital Sky Survey (SDSS) commissioning data (Shimasaku et al. 2001) with those of supervised artificial neural network programs constructed using the MATLAB Neural Network Toolbox package. Networks in this package have not previously been used for galaxy classification. It is found that simple neural networks are able to improve on the results of linear classifiers, giving correlation coefficients of the order of 0.8 +/- 0.1, compared with those of around 0.7 +/- 0.1 for linear classifiers. The networks are trained using the resilient backpropagation algorithm, which, to the author's knowledge, has not been specifically used in the galaxy classification literature. The galaxy parameters used and the network architecture are both important, and in particular the galaxy concentration index, a measure o...

  4. Neural networks for harmonic structure in music perception and action

    OpenAIRE

    Bianco, R.; Novembre, G.; Keller, P. E.; Kim, S G; Scharf, F; Friederici, A.D.; Villringer, A; Sammler, D.

    2016-01-01

    The ability to predict upcoming structured events based on long-term knowledge and contextual priors is a fundamental principle of human cognition. Tonal music triggers predictive processes based on structural properties of harmony, i.e., regularities defining the arrangement of chords into well-formed musical sequences. While the neural architecture of structure-based predictions during music perception is well described, little is known about the neural networks for analogous predictions in...

  5. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  6. Analysis of surface ozone using a recurrent neural network.

    Science.gov (United States)

    Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo

    2015-05-01

    Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) have been measured for 16 years, from 1998 to 2013, in a seaside town in central Italy. The seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration under two scenarios: 1) in the first, only meteorological parameters are used as inputs, and 2) in the second, photochemical parameters are added to those of the first scenario. In order to evaluate the performance of the models, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error and the factor of two. All the criteria show that the neural network gives better results, compared to the regression model, in all the model scenarios. Predictions of O₃ have been carried out by many authors using a feed-forward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture shows better performance than the multiple linear regression model that uses meteorological and photochemical data as input, making the neural network model with recurrent architecture a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model operationally in areas where only meteorological data are available, in order to predict O₃ also at sites where it has not yet been measured. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    Science.gov (United States)

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  8. A neural network based seafloor classification using acoustic backscatter

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    This paper presents the results of a study of Artificial Neural Network (ANN) architectures [Self-Organizing Map (SOM) and Multi-Layer Perceptron (MLP)] using single beam echosounding data. The single beam echosounder, operable at 12 kHz, has been used...

  9. On the use of a pruning prior for neural networks

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1996-01-01

    We address the problem of using a regularization prior that prunes unnecessary weights in a neural network architecture. This prior provides a convenient alternative to traditional weight-decay. Two examples are studied to support this method and illustrate its use. First we use the sunspots...

  10. Quantum computing in neural networks

    CERN Document Server

    Gralewicz, P

    2004-01-01

    According to the statistical interpretation of quantum theory, quantum computers form a distinguished class of probabilistic machines (PMs) by encoding n qubits in 2^n pbits. This raises the possibility of large-scale quantum computing using PMs, especially with neural networks which have the innate capability for probabilistic information processing. Restricting ourselves to a particular model, we construct and numerically examine the performance of neural circuits implementing universal quantum gates. A discussion on the physiological plausibility of the proposed coding scheme is also provided.

  11. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics on network model of object recognition in human vision, the self-organization of functional architecture in t

  12. A convolutional neural network neutrino event classifier

    Science.gov (United States)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  13. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  14. Optical implementation of neural networks

    Science.gov (United States)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computing (ONC) system using inexpensive pocket-size liquid crystal televisions (LCTVs) has been developed by graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computer has only 8×8=64 neurons, it can easily be extended to 16×20=320 neurons. The major advantages of this LCTV architecture, as compared with other reported ONCs, are its low cost and operational flexibility. To test the performance, several neural net models are used: interpattern association, hetero-association and unsupervised learning algorithms. System design considerations and experimental demonstrations are also included.

  15. Discontinuities in recurrent neural networks.

    Science.gov (United States)

    Gavaldá, R; Siegelmann, H T

    1999-04-01

    This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN that are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

  16. Neural Architecture of Auditory Object Categorization

    Directory of Open Access Journals (Sweden)

    Yune-Sang Lee

    2011-10-01

    Full Text Available We can identify objects by sight or by sound, yet far less is known about auditory object recognition than about visual recognition. Any exemplar of a dog (e.g., a picture) can be recognized on multiple categorical levels (e.g., animal, dog, poodle). Using fMRI combined with machine-learning techniques, we studied these levels of categorization with sounds rather than images. Subjects heard sounds of various animate and inanimate objects, and unrecognizable control sounds. We report four primary findings: (1) some distinct brain regions selectively coded for basic (“dog”) versus superordinate (“animal”) categorization; (2) classification at the basic level entailed more extended cortical networks than those for superordinate categorization; (3) human voices were recognized far better by multiple brain regions than were any other sound categories; (4) regions beyond temporal lobe auditory areas were able to distinguish and categorize auditory objects. We conclude that multiple representations of an object exist at different categorical levels. This neural instantiation of object categories is distributed across multiple brain regions, including so-called “visual association areas,” indicating that these regions support object knowledge even when the input is auditory. Moreover, our findings appear to conflict with prior well-established theories of category-specific modules in the brain.

  17. Neutron spectrum unfolding using radial basis function neural networks.

    Science.gov (United States)

    Alvar, Amin Asgharzadeh; Deevband, Mohammad Reza; Ashtiyani, Meghdad

    2017-07-26

    Neutron energy spectrum unfolding has been the subject of research for several years. The Bayesian theory, Monte Carlo simulation, and iterative methods are some of the methods that have been used for neutron spectrum unfolding. In this study, radial basis function (RBF) and multilayer perceptron (MLP) artificial neural networks (ANNs) were used for the unfolding of the neutron spectrum, and a comparison was made between the networks' results. Both neural network architectures were trained and tested using the same data set for neutron spectrum unfolding from the response of LiI detectors with Eu impurity. The advantages of each ANN method in the unfolding of the neutron energy spectrum were investigated, and the performance of the networks was compared. The results obtained showed that the RBF neural network can be applied as an effective method for unfolding the neutron spectrum, especially when the main target is neutron dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.
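
    A compact sketch of the RBF approach, assuming Gaussian basis functions with centres picked from the training data and a linear output layer solved by least squares; the detector-response and spectrum-bin dimensions are placeholders, not those of the LiI data used in the study.

        import numpy as np

        def rbf_design(X, centers, width):
            # Gaussian activations for every (sample, centre) pair
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))

        def fit_rbf(X, Y, n_centers=10, width=0.5, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), n_centers, replace=False)]
            W, *_ = np.linalg.lstsq(rbf_design(X, centers, width), Y, rcond=None)
            return centers, W

        rng = np.random.default_rng(4)
        X, Y = rng.random((60, 5)), rng.random((60, 8))       # placeholder responses and spectra
        centers, W = fit_rbf(X, Y)
        print((rbf_design(X[:2], centers, 0.5) @ W).shape)    # predictions for two samples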

  18. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict reliably its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show a promising performance of the approach, which can be turned into a local and better "biologically plausible" algorithm.

  19. Using fuzzy logic to integrate neural networks and knowledge-based systems

    Science.gov (United States)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.

  20. Fuzzy logic systems are equivalent to feedforward neural networks

    Institute of Scientific and Technical Information of China (English)

    李洪兴

    2000-01-01

    Fuzzy logic systems and feedforward neural networks are equivalent in essence. First, interpolation representations of fuzzy logic systems are introduced and several important conclusions are given. Then three important kinds of neural networks are defined, i.e. linear neural networks, rectangle wave neural networks and nonlinear neural networks. Then it is proved that nonlinear neural networks can be represented by rectangle wave neural networks. Based on the results mentioned above, the equivalence between fuzzy logic systems and feedforward neural networks is proved, which will be very useful for theoretical research or applications on fuzzy logic systems or neural networks by means of combining fuzzy logic systems with neural networks.

  1. Fiber optic Adaline neural networks

    Science.gov (United States)

    Ghosh, Anjan K.; Trepka, Jim; Paparao, Palacharla

    1993-02-01

    Optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators has been discussed recently. We describe the design of a single layer fiber optic Adaline neural network which can be used as a bit pattern classifier. In our realization we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network described in this paper is designed for optical processing of guided lightwave signals, not electronic signals. We analyzed the convergence or learning characteristics of the optically implemented Adaline in the presence of errors in the hardware, and we studied methods for improving the convergence rate of the Adaline.
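
    The Adaline itself is simple enough to summarise in a few lines: a single linear unit trained with the Widrow-Hoff LMS rule and thresholded for classification. The bit patterns below are arbitrary stand-ins for the guided-lightwave bit patterns the optical system would classify.

        import numpy as np

        def train_adaline(X, targets, lr=0.05, epochs=50):
            # Widrow-Hoff LMS rule on the linear (pre-threshold) output
            Xb = np.hstack([X, np.ones((len(X), 1))])         # append a bias input
            w = np.zeros(Xb.shape[1])
            for _ in range(epochs):
                for x, t in zip(Xb, targets):
                    w += lr * (t - w @ x) * x
            return w

        def classify(w, X):
            Xb = np.hstack([X, np.ones((len(X), 1))])
            return np.sign(Xb @ w)

        X = np.array([[1, 1, -1, -1], [1, -1, -1, -1], [-1, -1, 1, 1], [-1, 1, 1, 1]], float)
        t = np.array([1, 1, -1, -1], float)
        print(classify(train_adaline(X, t), X))               # reproduces the target labels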

  2. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  3. Neural Networks for Speech Application.

    Science.gov (United States)

    1987-11-01

    Neural network design draws on theories of operation and neuroscience accounts of how neurons process information in the brain. Early studies by McCulloch and Pitts during the forties led to formal models of the neuron, and later work produced the commercially available Mark III and Mark IV neurocomputers that model neural networks. References cited include Lashley's Brain Mechanisms and Intelligence (1929) and McCulloch and Pitts' "A Logical Calculus of the Ideas Immanent in Nervous Activity".

  4. Analog electronic neural network circuits

    Energy Technology Data Exchange (ETDEWEB)

    Graf, H.P.; Jackel, L.D. (AT and T Bell Labs., Holmdel, NJ (USA))

    1989-07-01

    The large interconnectivity and moderate precision required in neural network models present new opportunities for analog computing. This paper discusses analog circuits for a variety of problems such as pattern matching, optimization, and learning. Most of the circuits built so far are relatively small, exploratory designs. The most mature circuits are those for template matching. Chips performing this function are now being applied to pattern recognition problems.

  5. Representational Distance Learning for Deep Neural Networks.

    Science.gov (United States)

    McClure, Patrick; Kriegeskorte, Nikolaus

    2016-01-01

    Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains.
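
    The core objects are easy to sketch: a representational distance matrix (RDM) for a stack of activations, and an RDL-style loss that penalises the mismatch between the student's and the teacher's RDMs. Squared Euclidean distance is used here purely for brevity; the paper's exact distance measure, weighting and optimisation details are not reproduced.

        import numpy as np

        def rdm(activations):
            # pairwise squared Euclidean distances between responses to a set of stimuli
            sq = (activations ** 2).sum(axis=1)
            return sq[:, None] + sq[None, :] - 2.0 * activations @ activations.T

        def rdl_loss(student_act, teacher_act):
            # penalise the difference between the student's and the teacher's RDMs
            diff = rdm(student_act) - rdm(teacher_act)
            return 0.5 * (diff ** 2).mean()

        rng = np.random.default_rng(5)
        student = rng.standard_normal((6, 20))    # hypothetical layer activations, 6 stimuli
        teacher = rng.standard_normal((6, 50))    # teacher representation of the same stimuli
        print(round(rdl_loss(student, teacher), 3))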

  6. A fraud management system architecture for next-generation networks.

    Science.gov (United States)

    Bihina Bella, M A; Eloff, J H P; Olivier, M S

    2009-03-10

    This paper proposes an original architecture for a fraud management system (FMS) for convergent next-generation networks (NGNs), which are based on the Internet protocol (IP). The architecture has the potential to satisfy the requirements of flexibility and application independence for effective fraud detection in NGNs that cannot be met by traditional FMSs. The proposed architecture has a thorough four-stage detection process that analyses billing records in IP detail record (IPDR) format - an emerging IP-based billing standard - for signs of fraud. Its key feature is its usage of neural networks in the form of self-organising maps (SOMs) to help uncover unknown NGN fraud scenarios. A prototype was implemented to test the effectiveness of using a SOM for fraud detection and is also described in the paper.
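
    A minimal self-organising map sketch in the spirit of the detection stage described above: prototypes are fitted to records assumed normal, and a record's distance to its best-matching unit serves as a novelty score that could flag unknown fraud scenarios. The grid size, feature count and the synthetic IPDR-like records are assumptions.

        import numpy as np

        def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
            # classic SOM update: pull the best-matching unit and its grid neighbours towards each record
            rng = np.random.default_rng(seed)
            weights = rng.random((rows * cols, data.shape[1]))
            grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
            for epoch in range(epochs):
                lr = lr0 * (1 - epoch / epochs)
                sigma = 0.5 + sigma0 * (1 - epoch / epochs)
                for x in data:
                    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
                    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
                    weights += lr * h[:, None] * (x - weights)
            return weights

        def novelty_score(weights, x):
            # large distance to every prototype suggests an unusual (possibly fraudulent) record
            return ((weights - x) ** 2).sum(axis=1).min()

        rng = np.random.default_rng(6)
        normal_records = rng.normal(0.5, 0.1, (200, 6))       # hypothetical normalised features
        som = train_som(normal_records)
        print(novelty_score(som, normal_records[0]) < novelty_score(som, np.ones(6)))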

  7. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  8. Process Neural Networks Theory and Applications

    CERN Document Server

    He, Xingui

    2010-01-01

    "Process Neural Networks - Theory and Applications" proposes the concept and model of a process neural network for the first time, showing how it expands the mapping relationship between the input and output of traditional neural networks, and enhancing the expression capability for practical problems, with broad applicability to solving problems relating to process in practice. Some theoretical problems such as continuity, functional approximation capability, and computing capability, are strictly proved. The application methods, network construction principles, and optimization alg

  9. Neural network subtyping of depression.

    Science.gov (United States)

    Florio, T M; Parker, G; Austin, M P; Hickie, I; Mitchell, P; Wilhelm, K

    1998-10-01

    To examine the applicability of a neural network classification strategy for assessing the independent contribution of psychomotor disturbance (PMD) and endogeneity symptoms to the DSM-III-R definition of melancholia. We studied 407 depressed patients with a clinical dataset comprising 17 endogeneity symptoms and the 18-item CORE measure of behaviourally rated PMD. A multilayer perceptron neural network was used to fit non-linear models of varying complexity. A linear discriminant function analysis was also used to generate a model for comparison with the non-linear models. Models (linear and non-linear) using PMD items only and endogeneity symptoms only had similar rates of successful classification, while non-linear models combining both PMD and symptom scores achieved the best classifications. Our current non-linear model was superior to a linear analysis, a finding which may have wider application to psychiatric classification. Our non-linear analysis of depressive subtypes supports the binary view that melancholic and non-melancholic depression are separate clinical disorders rather than different forms of the same entity. This study illustrates how non-linear modelling with neural networks is a potentially fruitful approach to the study of the diagnostic taxonomy of psychiatric disorders and to clinical decision-making.

  10. HL-2A tokamak disruption forecasting based on an artificial neural network

    Institute of Scientific and Technical Information of China (English)

    Wang Hao; Wang Ai-Ke; Yang Qing-Wei; Ding Xuan-Tong; Dong Jia-Qi; Sanuki H; Itoh K

    2007-01-01

    Artificial neural networks are trained to forecast plasma disruptions in the HL-2A tokamak. An optimized network architecture is obtained. Saliency analysis is performed to assess the relative importance of different diagnostic signals as network inputs. The trained networks can successfully detect the disruptive pulses of the HL-2A tokamak. The results show the possibility of developing a neural network predictor that intervenes well in advance to avoid plasma disruption or mitigate its effects.

  11. Neural network based dynamic controllers for industrial robots.

    Science.gov (United States)

    Oh, S Y; Shin, W C; Kim, H G

    1995-09-01

    An industrial robot's dynamic performance is frequently measured by positioning accuracy at high speeds, so a good dynamic controller that can accurately compute robot dynamics at a servo rate high enough to ensure system stability is essential. A real-time dynamic controller for an industrial robot is developed here using neural networks. First, an efficient time-selectable hidden layer architecture has been developed based on system dynamics localized in time, which lends itself to real-time learning and control along with enhanced mapping accuracy. Second, the neural network architecture has also been specially tuned to accommodate servo dynamics. This not only facilitates the system design through reduced sensing requirements for the controller but also enhances the control performance over a control architecture that neglects servo dynamics. Experimental results demonstrate the controller's excellent learning and control performance compared with a conventional controller, and it thus has good potential for practical use in industrial robots.

  12. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    OM PRAKASH PATEL; ARUNA TIWARI

    2016-11-01

    In this paper, a quantum based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding the weights and a separability parameter in a quantum based manner. The quantum computing concept represents solutions probabilistically and provides a large search space for finding the optimal values of the required parameters using a Gaussian random number generator. The neural network structure is formed constructively with three layers: an input layer, a hidden layer and an output layer. This constructive way of deciding the network eliminates unnecessary training of the neural network. A new parameter, the quantum separability parameter (QSP), is introduced here; during learning it searches for an optimal separability plane to classify the input samples and is taken as the threshold of the neuron. The algorithm is tested with three benchmark datasets and produces better results than existing quantum inspired and other classification approaches.

  13. Perspective: network-guided pattern formation of neural dynamics.

    Science.gov (United States)

    Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C

    2014-10-05

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. An Architectural Model for Intelligent Network Management

    Institute of Scientific and Technical Information of China (English)

    罗军舟; 顾冠群; 费翔

    2000-01-01

    The traditional network management approach involves managing each vendor's equipment and network segment in isolation through its own proprietary element management system. It is necessary to set up a new network management architecture that consolidates operations across vendor and technology boundaries. In this paper, an architectural model for Intelligent Network Management (INM) is presented. The INM system includes a manager system, which controls all subsystems and coordinates different management tasks; an expert system, which is responsible for handling particularly difficult problems; and intelligent agents, which bring management closer to applications and user requirements by being spread through network segments or domains. Within the proposed expert system model, an intelligent fault management system is presented in particular. The architectural model is intended to build an INM system that meets the needs of managing modern network systems.

  15. Practical neural network recipies in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed.

  16. Identification and Control of Non-Linear Time-Varying Dynamical Systems Using Artificial Neural Networks

    Science.gov (United States)

    1992-09-01

    The architecture of an artificial neural network has three main levels: topological, data flow, and neurodynamics. The presentation here follows the guidelines of Neural Computing by NeuralWare, Inc. [NC91], who developed the basic software. The third level, neurodynamics, describes in detail the operations that act upon the data within a processing element; this level defines the functions and the

  17. Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm

    National Research Council Canada - National Science Library

    Nagesh Ramprasad

    2016-01-01

    .... In this research, the focus is on the Microsoft Neural Network Algorithm, a simple implementation of the adaptable and popular neural networks that are used in machine learning...

  18. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
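
    One common way to obtain the two-dimensional view described above is principal component analysis; a minimal version is shown here, with randomly generated 15-dimensional data standing in for the project's real input parameters.

        import numpy as np

        def project_2d(X):
            # project a dataset onto its two leading principal components for plotting
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[:2].T

        X = np.random.default_rng(7).standard_normal((300, 15))   # placeholder inputs
        coords = project_2d(X)
        print(coords.shape)                                       # (300, 2): ready to scatter-plot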

  19. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  20. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is intended for investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. The document also describes the MemBrain application environment as a powerful and easy-to-use neural network editor and simulator supporting ANNs.

  1. Seafloor classification using acoustic backscatter echo-waveform - Artificial neural network applications

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Mahale, V.; Navelkar, G.S.; Desai, R.G.P.

    In this paper a seafloor classification system based on an artificial neural network (ANN) has been designed. The ANN architecture employed here is a combination of the Self-Organizing Feature Map (SOFM) and Learning Vector Quantization (LVQ1). Currently...

  2. An Optimal Implementation on FPGA of a Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    W. Mansour

    2011-01-01

    Full Text Available The associative Hopfield memory is a form of recurrent Artificial Neural Network (ANN) that can be used in applications such as pattern recognition, noise removal, information retrieval, and combinatorial optimization problems. This paper presents the implementation of the Hopfield Neural Network (HNN) parallel architecture on a SRAM-based FPGA. The main advantage of the proposed implementation is its high performance and cost effectiveness: it requires O(1) multiplications and O(log N) additions, whereas most others require O(N) multiplications and O(N) additions.
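
    For orientation, the underlying Hopfield memory can be written down in a few lines: Hebbian storage of bipolar patterns followed by iterated sign updates that recall a stored pattern from a corrupted probe. This sketch ignores the FPGA-specific parallelisation that is the paper's actual contribution.

        import numpy as np

        def hopfield_store(patterns):
            # Hebbian storage rule for bipolar (+1/-1) patterns, zero self-connections
            n = patterns.shape[1]
            W = patterns.T @ patterns / n
            np.fill_diagonal(W, 0.0)
            return W

        def hopfield_recall(W, probe, steps=10):
            state = probe.copy()
            for _ in range(steps):
                state = np.sign(W @ state)
                state[state == 0] = 1.0               # break ties consistently
            return state

        patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                             [1, 1, 1, 1, -1, -1, -1, -1]], float)
        W = hopfield_store(patterns)
        noisy = patterns[0].copy()
        noisy[0] = -noisy[0]                          # flip one bit
        print(hopfield_recall(W, noisy))              # recovers the first stored pattern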

  3. Additive-Multiplicative Fuzzy Neural Network and Its Performance

    Institute of Scientific and Technical Information of China (English)

    翟东海; 靳蕃

    2003-01-01

    In view of the main weaknesses of current fuzzy neural networks such as low reasoning precision and long training time, an Additive-Multiplicative Fuzzy Neural Network (AMFNN) model and its architecture are presented. AMFNN combines additive inference and multiplicative inference into an integral whole, reasonably makes use of their advantages of inference and effectively overcomes their weaknesses when they are used for inference separately. Here, an error back propagation algorithm for AMFNN is presented based on the gradient descent method. Comparisons between the AMFNN and six representative fuzzy inference methods shows that the AMFNN is characterized by higher reasoning precision, wider application scope, stronger generalization capability and easier implementation.

  4. Template learning of cellular neural network using genetic programming.

    Science.gov (United States)

    Radwan, Elsayed; Tazaki, Eiichiro

    2004-08-01

    A new learning algorithm for a space-invariant Uncoupled Cellular Neural Network is introduced. Learning is formulated as an optimization problem. Genetic Programming has been selected for creating new knowledge because it allows the system to find new rules both near to good ones and far from them, looking for unknown good control actions. In accordance with the lattice Cellular Neural Network architecture, Genetic Programming is used to derive the Cloning Template. Exploration of any stable domain is possible with the current approach. Details of the algorithm are discussed and several application results are shown.

  5. Hybrid architecture for building secure sensor networks

    Science.gov (United States)

    Owens, Ken R., Jr.; Watkins, Steve E.

    2012-04-01

    Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns for sensor networks. The first area is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes that is designed to communicate to low-cost cloud computing infrastructure in the network is the foundation to delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme, and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This function will primarily be accomplished through encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components to the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.

  6. Salience-Affected Neural Networks

    CERN Document Server

    Remmelzwaal, Leendert A; Ellis, George F R

    2010-01-01

    We present a simple neural network model which combines a locally-connected feedforward structure, as is traditionally used to model inter-neuron connectivity, with a layer of undifferentiated connections which model the diffuse projections from the human limbic system to the cortex. This new layer makes it possible to model global effects such as salience, at the same time as the local network processes task-specific or local information. This simple combination network displays interactions between salience and regular processing which correspond to known effects in the developing brain, such as enhanced learning as a result of heightened affect. The cortex biases neuronal responses to affect both learning and memory, through the use of diffuse projections from the limbic system to the cortex. Standard ANNs do not model this non-local flow of information represented by the ascending systems, which are a significant feature of the structure of the brain, and although they do allow associational learning with...

  7. Dynamic Analysis of Structures Using Neural Networks

    Directory of Open Access Journals (Sweden)

    N. Ahmadi

    2008-01-01

    Full Text Available In recent years, neural networks have been considered the best candidate for fast approximation with arbitrary accuracy in time-consuming problems. Dynamic analysis of structures under earthquake loading is a time-consuming process. We employed two kinds of neural networks, the Generalized Regression neural network (GR) and the Back-Propagation Wavenet neural network (BPW), for approximating the dynamic time-history response of frame structures. GR is a traditional radial basis function neural network, while BPW is categorized as a wavelet neural network. In BPW, the sigmoid activation functions of the hidden layer neurons are substituted with wavelets, and weight training is achieved using the Scaled Conjugate Gradient (SCG) algorithm. Comparing the results of BPW with those of GR in the dynamic analysis of an eight-story steel frame indicates that the accuracy of the properly trained BPW was better than that of GR; therefore, BPW can be efficiently used for approximate dynamic analysis of structures.
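
    The BPW idea of swapping sigmoid hidden units for wavelet activations can be illustrated with a minimal sketch. The example below is not the authors' implementation: it fits a toy one-dimensional response with a Mexican-hat (Ricker) wavelet hidden layer trained by plain gradient descent rather than Scaled Conjugate Gradient, and all sizes and learning rates are arbitrary assumptions.

    ```python
    import numpy as np

    def ricker(z):
        """Mexican-hat (Ricker) wavelet, a common choice for wavelet neurons."""
        return (1.0 - z**2) * np.exp(-0.5 * z**2)

    def ricker_grad(z):
        return (z**3 - 3.0 * z) * np.exp(-0.5 * z**2)

    rng = np.random.default_rng(0)
    # Toy regression target standing in for a structural time-history response.
    x = np.linspace(-2, 2, 200).reshape(-1, 1)
    y = np.sin(3 * x) * np.exp(-x**2)

    n_hidden = 12
    W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

    lr = 0.05
    for epoch in range(2000):
        z = x @ W1 + b1          # pre-activations of the wavelet layer
        h = ricker(z)            # wavelet activations replace sigmoids
        pred = h @ W2 + b2
        err = pred - y
        # Backpropagation through the wavelet layer (plain gradient descent).
        grad_W2 = h.T @ err / len(x); grad_b2 = err.mean(0)
        dh = err @ W2.T * ricker_grad(z)
        grad_W1 = x.T @ dh / len(x); grad_b1 = dh.mean(0)
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1

    print("final MSE:", float((err**2).mean()))
    ```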

  8. Segmented-memory recurrent neural networks.

    Science.gov (United States)

    Chen, Jinmiao; Chaudhari, Narendra S

    2009-08-01

    Conventional recurrent neural networks (RNNs) have difficulties in learning long-term dependencies. To tackle this problem, we propose an architecture called segmented-memory recurrent neural network (SMRNN). A symbolic sequence is broken into segments and then presented as inputs to the SMRNN one symbol per cycle. The SMRNN uses separate internal states to store symbol-level context, as well as segment-level context. The symbol-level context is updated for each symbol presented for input. The segment-level context is updated after each segment. The SMRNN is trained using an extended real-time recurrent learning algorithm. We test the performance of SMRNN on the information latching problem, the "two-sequence problem" and the problem of protein secondary structure (PSS) prediction. Our implementation results indicate that SMRNN performs better on long-term dependency problems than conventional RNNs. Besides, we also theoretically analyze how the segmented memory of SMRNN helps learning long-term temporal dependencies and study the impact of the segment length.
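
    A minimal sketch of the two-level memory idea described above, assuming a fixed segment length and tanh state updates; it omits the output layer and the extended real-time recurrent learning used to train the actual SMRNN, and all sizes are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_sym, n_hid = 8, 16          # symbol alphabet size, hidden width (arbitrary)
    segment_len = 5               # symbols per segment (assumption)

    Wx = rng.normal(scale=0.3, size=(n_sym, n_hid))   # input -> symbol-level state
    Wh = rng.normal(scale=0.3, size=(n_hid, n_hid))   # symbol-level recurrence
    Ws = rng.normal(scale=0.3, size=(n_hid, n_hid))   # symbol -> segment-level state
    Wc = rng.normal(scale=0.3, size=(n_hid, n_hid))   # segment-level recurrence

    def one_hot(i, n):
        v = np.zeros(n); v[i] = 1.0; return v

    def smrnn_states(sequence):
        """Run a sequence of symbol ids through the two-level memory."""
        h = np.zeros(n_hid)   # symbol-level context, updated every symbol
        c = np.zeros(n_hid)   # segment-level context, updated once per segment
        for t, sym in enumerate(sequence, start=1):
            h = np.tanh(one_hot(sym, n_sym) @ Wx + h @ Wh)
            if t % segment_len == 0:          # segment boundary reached
                c = np.tanh(h @ Ws + c @ Wc)  # fold the segment into long-term context
        return h, c

    h, c = smrnn_states(rng.integers(0, n_sym, size=23))
    print(h.shape, c.shape)
    ```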

  9. Fast Algorithms for Convolutional Neural Networks

    OpenAIRE

    Lavin, Andrew; Gray, Scott

    2015-01-01

    Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3x3 filters. We ...

  10. Modelling Microwave Devices Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Andrius Katkevičius

    2012-04-01

    Full Text Available Artificial neural networks (ANN) have recently gained attention as fast and flexible tools for modelling and designing microwave devices. The paper reviews the opportunities to use them for analysis and synthesis tasks. The article focuses on what tasks might be solved using neural networks and what challenges might arise when using artificial neural networks for carrying out tasks on microwave devices, and discusses problem-solving techniques for microwave devices with intermittent characteristics. Article in Lithuanian.

  11. Rule Extraction using Artificial Neural Networks

    OpenAIRE

    2010-01-01

    Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can...

  12. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  13. Forecasting Exchange Rate Using Neural Networks

    OpenAIRE

    Raksaseree, Sukhita

    2009-01-01

    Artificial neural network models have become increasingly popular among researchers and investors since many studies have shown that they outperform traditional statistical models. This paper aims to investigate neural network performance in forecasting foreign exchange rates based on the backpropagation algorithm. Forecasts of the Thai Baht against seven currencies are conducted to observe the performance of the neural network models using the performance criteria for both ...

  14. Semantic Interpretation of An Artificial Neural Network

    Science.gov (United States)

    1995-12-01

    ARTIFICIAL NEURAL NETWORK .7,’ THESIS Stanley Dale Kinderknecht Captain, USAF 770 DEAT7ET77,’H IR O C 7... ARTIFICIAL NEURAL NETWORK THESIS Stanley Dale Kinderknecht Captain, USAF AFIT/GCS/ENG/95D-07 Approved for public release; distribution unlimited The views...Government. AFIT/GCS/ENG/95D-07 SEMANTIC INTERPRETATION OF AN ARTIFICIAL NEURAL NETWORK THESIS Presented to the Faculty of the School of Engineering of

  15. Feature Weight Tuning for Recursive Neural Networks

    OpenAIRE

    2014-01-01

    This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform "weight tuning" for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful c...

  16. Robust Convolutional Neural Networks for Image Recognition

    Directory of Open Access Journals (Sweden)

    Hayder M. Albeahdili

    2015-11-01

    Full Text Available Image recognition has recently become a vital task addressed by several methods, and one of the most widely used is the Convolutional Neural Network (CNN). However, some tasks depend on small features that are an essential part of the task, and classification with a plain CNN is then inefficient because most of those features diminish before reaching the final stage of classification. In this work, essential parameters that can influence model performance are analyzed and explored. Furthermore, several elegant contemporary models are leveraged to introduce a new, improved model. Finally, a new CNN architecture is proposed which achieves state-of-the-art classification results on different challenge benchmarks. The experiments are conducted on the MNIST, CIFAR-10, and CIFAR-100 datasets. Experimental results show that the proposed model outperforms and achieves superior results compared to the most contemporary approaches.

  17. Artificial Neural Networks, Symmetries and Differential Evolution

    CERN Document Server

    Urfalioglu, Onay

    2010-01-01

    Neuroevolution is an active and growing research field, especially in times of increasingly parallel computing architectures. Learning methods for Artificial Neural Networks (ANN) can be divided into two groups. Neuroevolution is mainly based on Monte-Carlo techniques and belongs to the group of global search methods, whereas other methods such as backpropagation belong to the group of local search methods. ANNs exhibit important symmetry properties, which can influence Monte-Carlo methods. On the other hand, local search methods are generally unaffected by these symmetries. In the literature, dealing with the symmetries is generally reported as being not effective or even as yielding inferior results. In this paper, we introduce the so-called Minimum Global Optimum Proximity principle, derived from theoretical considerations, for effective symmetry breaking, applied to offline supervised learning. Using Differential Evolution (DE), which is a popular and robust evolutionary global optimization method, we experi...
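
    As a concrete reference for the kind of global search named here, the sketch below evolves the weights of a tiny 2-2-1 network on XOR with the standard DE/rand/1/bin scheme. It is only a generic Differential Evolution illustration: the Minimum Global Optimum Proximity symmetry-breaking principle from the paper is not reproduced, and the network size, population size and control parameters are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Tiny supervised task: learn XOR with a 2-2-1 network whose weights are
    # evolved by DE rather than backpropagation (all sizes are placeholders).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([0., 1., 1., 0.])
    DIM = 2 * 2 + 2 + 2 * 1 + 1          # weights and biases of a 2-2-1 MLP

    def unpack(v):
        W1 = v[0:4].reshape(2, 2); b1 = v[4:6]
        W2 = v[6:8].reshape(2, 1); b2 = v[8]
        return W1, b1, W2, b2

    def loss(v):
        W1, b1, W2, b2 = unpack(v)
        h = np.tanh(X @ W1 + b1)
        out = (h @ W2).ravel() + b2
        return float(((out - y) ** 2).mean())

    NP, F, CR = 30, 0.7, 0.9             # population size, scale factor, crossover rate
    pop = rng.normal(scale=1.0, size=(NP, DIM))
    fit = np.array([loss(p) for p in pop])

    for gen in range(500):
        for i in range(NP):
            a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                      # DE/rand/1 mutation
            cross = rng.random(DIM) < CR
            cross[rng.integers(DIM)] = True               # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = loss(trial)
            if f_trial < fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, f_trial

    print("best XOR MSE:", fit.min())
    ```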

  18. Robust smile detection using convolutional neural networks

    Science.gov (United States)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether it is a smiling face or not. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.

  19. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adaptation of artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. A part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, as well as the importance of robustness. The book is of tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes of description of neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  20. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others]

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
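
    The OLAM/linear-superposition idea, identifying an unknown sample's composition from the whole spectrum rather than individual photo-peaks, can be sketched as an ordinary linear least-squares fit against a library of known spectra. The example below uses synthetic Gaussian "library" spectra and made-up mixing fractions purely for illustration; it is not the paper's detector data or trained network.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_channels, n_isotopes = 256, 4
    # Synthetic "library" spectra, one column per known isotope (stand-ins only).
    centers = [40, 90, 150, 210]
    chan = np.arange(n_channels)
    library = np.stack([np.exp(-0.5 * ((chan - c) / 6.0) ** 2) for c in centers], axis=1)

    # An unknown sample is assumed to be a linear superposition of the library spectra.
    true_mix = np.array([0.5, 0.1, 0.3, 0.1])
    unknown = library @ true_mix + rng.normal(scale=0.01, size=n_channels)

    # OLAM-style identification: a linear map fitted by least squares over the whole
    # spectrum, rather than over individual photo-peaks.
    mix_hat, *_ = np.linalg.lstsq(library, unknown, rcond=None)
    print("estimated composition:", np.round(mix_hat, 3))
    ```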

  1. Neural Networks for Rapid Design and Analysis

    Science.gov (United States)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays, as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.

  2. Systolic implementation of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    De Groot, A.J.; Parker, S.R.

    1989-01-01

    The backpropagation algorithm for error gradient calculations in multilayer, feed-forward neural networks is derived in matrix form involving inner and outer products. It is demonstrated that these calculations can be carried out efficiently using systolic processing techniques, particularly using the SPRINT, a 64-element systolic processor developed at Lawrence Livermore National Laboratory. This machine contains one million synapses, and forward-propagates 12 million connections per second, using 100 watts of power. When executing the algorithm, each SPRINT processor performs useful work 97% of the time. The theory and applications are confirmed by some nontrivial examples involving seismic signal recognition. 4 refs., 7 figs.
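
    The inner/outer-product structure that makes backpropagation map well onto systolic arrays can be seen in a few lines. This is only a generic one-hidden-layer sketch with arbitrary sizes, not the SPRINT implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # One hidden layer; sizes are arbitrary placeholders.
    n_in, n_hid, n_out = 5, 8, 3
    W1 = rng.normal(scale=0.3, size=(n_hid, n_in))
    W2 = rng.normal(scale=0.3, size=(n_out, n_hid))

    def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

    x = rng.normal(size=n_in)
    t = rng.random(n_out)            # target

    # Forward pass: matrix-vector (inner) products.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # Backward pass: error terms propagate through W^T, and every weight gradient
    # is an outer product of a delta vector and an activation vector.
    delta2 = (y - t) * y * (1 - y)
    delta1 = (W2.T @ delta2) * h * (1 - h)
    grad_W2 = np.outer(delta2, h)
    grad_W1 = np.outer(delta1, x)

    lr = 0.5
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    ```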

  3. Magnitude Sensitive Competitive Neural Networks

    OpenAIRE

    Pelayo Campillos, Enrique; Buldain Pérez, David; Orrite Uruñuela, Carlos

    2014-01-01

    This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are a set of competitive learning algorithms that include a magnitude term as a factor modulating the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that they are represented with high...
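
    A minimal sketch of magnitude-modulated competitive learning, assuming (purely for illustration) that the magnitude term is a per-unit win count normalised each epoch; the thesis's actual magnitude functions and update rules are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    X = rng.normal(size=(1000, 2))            # toy data to quantize
    K = 8                                     # number of competitive units
    centroids = X[rng.choice(len(X), K, replace=False)].copy()
    magnitude = np.ones(K)                    # per-unit magnitude term (assumed: win counts)

    lr = 0.05
    for epoch in range(20):
        for x in X[rng.permutation(len(X))]:
            d2 = ((centroids - x) ** 2).sum(axis=1)
            # Competition uses a magnitude-modulated distance, not the plain distance.
            winner = np.argmin(magnitude * d2)
            centroids[winner] += lr * (x - centroids[winner])
            magnitude[winner] += 1.0           # units that win often become "heavier"
        magnitude /= magnitude.mean()          # keep the modulation factors bounded

    print(np.round(centroids, 2))
    ```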

  4. Hybrid multiobjective evolutionary design for artificial neural networks.

    Science.gov (United States)

    Goh, Chi-Keong; Teoh, Eu-Jin; Tan, Kay Chen

    2008-09-01

    Evolutionary algorithms are a class of stochastic search methods that attempts to emulate the biological process of evolution, incorporating concepts of selection, reproduction, and mutation. In recent years, there has been an increase in the use of evolutionary approaches in the training of artificial neural networks (ANNs). While evolutionary techniques for neural networks have been shown to provide superior performance over conventional training approaches, the simultaneous optimization of network performance and architecture will almost always result in a slow training process due to the added algorithmic complexity. In this paper, we present a geometrical measure based on the singular value decomposition (SVD) to estimate the necessary number of neurons to be used in training a single-hidden-layer feedforward neural network (SLFN). In addition, we develop a new hybrid multiobjective evolutionary approach that includes the features of a variable-length representation that allows for easy adaptation of neural network structures, an architectural recombination procedure based on the geometrical measure that adapts the number of necessary hidden neurons and facilitates the exchange of neuronal information between candidate designs, and a microhybrid genetic algorithm (microHGA) with an adaptive local search intensity scheme for local fine-tuning. In addition, the performances of well-known algorithms as well as the effectiveness and contributions of the proposed approach are analyzed and validated through a variety of data set types.
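
    The SVD-based idea of estimating how many hidden neurons are actually needed can be illustrated by inspecting the singular values of the hidden-layer activation matrix: directions with negligible singular values contribute little independent information. The sketch below uses random data, a randomly initialised over-sized hidden layer and an arbitrary relative threshold, and is not the paper's exact measure.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy single-hidden-layer feedforward net evaluated on a batch of inputs.
    n_samples, n_in, n_hidden = 500, 4, 30     # deliberately over-sized hidden layer
    X = rng.normal(size=(n_samples, n_in))
    W1 = rng.normal(size=(n_in, n_hidden))
    H = np.tanh(X @ W1)                        # hidden-layer activation matrix

    # Singular values of H indicate how many hidden directions actually carry
    # independent information; tiny ones suggest redundant neurons.
    s = np.linalg.svd(H, compute_uv=False)
    tol = 1e-2 * s[0]                          # relative threshold (assumption)
    effective_neurons = int((s > tol).sum())
    print("singular values kept:", effective_neurons, "of", n_hidden)
    ```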

  5. Smart business networks: architectural aspects and risks

    NARCIS (Netherlands)

    L-F. Pau (Louis-François)

    2004-01-01

    textabstractThis paper summarizes key attributes and the uniqueness of smart business networks [1], to propose thereafter an operational implementation architecture. It involves, amongst others, the embedding of business logic specific to a network of business partners, inside the communications con

  6. Network architecture functional description and design

    Energy Technology Data Exchange (ETDEWEB)

    Stans, L.; Bencoe, M.; Brown, D.; Kelly, S.; Pierson, L.; Schaldach, C.

    1989-05-25

    This report provides a top-level functional description and design for the development and implementation of the central network to support the next-generation SNL, Albuquerque supercomputer in a UNIX® environment. It describes the network functions and provides an architecture and topology.

  7. Neural Network Controlled Visual Saccades

    Science.gov (United States)

    Johnson, Jeffrey D.; Grogan, Timothy A.

    1989-03-01

    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment) the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  8. Widrow-cellular neural network and optoelectronic implementation

    Science.gov (United States)

    Bal, Abdullah

    A new type of optoelectronic cellular neural network has been developed by providing the capability of coefficient adjustment in the cellular neural network (CNN) using a Widrow-based perceptron learning algorithm. The new supervised cellular neural network is called Widrow-CNN. Unlike the unsupervised CNN, the proposed learning algorithm allows the Widrow-CNN to be used easily for various image processing applications. Also, the capability of the CNN for image processing and feature extraction has been improved using a basic joint transform correlation architecture. This hardware implementation offers high-speed processing capability compared to digital applications. The optoelectronic Widrow-CNN has been tested on classic CNN feature extraction problems. It yields the best results even for hard feature extraction problems such as diagonal line detection and vertical line determination.
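
    Widrow-style coefficient adjustment amounts to a least-mean-squares (LMS, Widrow-Hoff) update of the template from input/desired-output pairs. The toy sketch below adapts a single 3x3 feedforward template and bias for a diagonal-line detection task; it is a generic LMS illustration, not the paper's optoelectronic joint-transform-correlator implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Desired behaviour: flag 3x3 binary neighbourhoods containing a full diagonal (toy task).
    def desired(patch):
        return 1.0 if np.trace(patch) == 3 else -1.0

    B = rng.normal(scale=0.1, size=(3, 3))   # feedforward (control) template
    bias = 0.0
    mu = 0.05                                # LMS step size

    for step in range(5000):
        patch = rng.integers(0, 2, size=(3, 3)).astype(float)
        d = desired(patch)
        y = float((B * patch).sum() + bias)  # linear response of one CNN cell
        e = d - y                            # Widrow-Hoff error
        B += mu * e * patch                  # LMS update of the template coefficients
        bias += mu * e

    print(np.round(B, 2), round(bias, 2))
    ```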

  9. UMA/GAN network architecture analysis

    Science.gov (United States)

    Yang, Liang; Li, Wensheng; Deng, Chunjian; Lv, Yi

    2009-07-01

    This paper critically analyzes the architecture of UMA, which is one of the Fixed Mobile Convergence (FMC) solutions and is also included by the Third Generation Partnership Project (3GPP). In the UMA/GAN network architecture, the UMA Network Controller (UNC) is the key equipment which connects the cellular core network and the mobile station (MS). A UMA network can be easily integrated into existing cellular networks without affecting the mobile core network, and can provide high-quality mobile services with preferentially priced indoor voice and data usage. This helps to improve the subscriber's experience. On the other hand, the UMA/GAN architecture helps to integrate other radio techniques into the cellular network, including WiFi, Bluetooth, WiMax and so on. This offers traditional mobile operators an opportunity to integrate WiMax technology into the cellular network. At the end of this article, we also give an analysis of the potential influence on the cellular core networks exerted by the UMA network.

  10. Self-organization in neural networks - Applications in structural optimization

    Science.gov (United States)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and the Hopfield and Elastic networks, in problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted in the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.

  11. Video Traffic Prediction Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Miloš Oravec

    2008-10-01

    Full Text Available In this paper, we consider video stream prediction for application in services like video-on-demand, videoconferencing, video broadcasting, etc. The aim is to predict the video stream for an efficient bandwidth allocation of the video signal. Efficient prediction of traffic generated by multimedia sources is an important part of traffic and congestion control procedures at the network edges. As a tool for the prediction, we use neural networks – the multilayer perceptron (MLP), radial basis function (RBF) networks and backpropagation through time (BPTT) neural networks. At first, we briefly introduce the theoretical background of neural networks, the prediction methods and the difference between them. We also propose video time-series processing using moving averages. Simulation results for each type of neural network together with final comparisons are presented. For comparison purposes, conventional (non-neural) prediction is also included. The purpose of our work is to construct suitable neural networks for variable bit rate video prediction and evaluate them. We use video traces from [1].
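
    A minimal sketch of the moving-average preprocessing plus one-step-ahead MLP prediction described above. The trace below is synthetic (the paper uses real video traces from [1]), and the window length, network size and training schedule are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic stand-in for a variable-bit-rate trace.
    t = np.arange(2000)
    trace = 50 + 10 * np.sin(2 * np.pi * t / 120) + rng.normal(scale=3, size=t.size)

    def moving_average(x, w=5):
        return np.convolve(x, np.ones(w) / w, mode="valid")

    series = moving_average(trace, 5)
    window = 8                                   # past samples fed to the predictor

    # Build (window -> next value) training pairs.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    Xn = (X - X.mean()) / X.std(); yn = (y - y.mean()) / y.std()

    # One-hidden-layer MLP trained by plain gradient descent (sizes are arbitrary).
    n_hid = 16
    W1 = rng.normal(scale=0.1, size=(window, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.1, size=(n_hid, 1));      b2 = np.zeros(1)

    lr = 0.05
    for epoch in range(500):
        h = np.tanh(Xn @ W1 + b1)
        pred = (h @ W2).ravel() + b2
        err = pred - yn
        dW2 = h.T @ err[:, None] / len(yn); db2 = err.mean()
        dh = err[:, None] @ W2.T * (1 - h ** 2)
        dW1 = Xn.T @ dh / len(yn); db1 = dh.mean(0)
        W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

    print("one-step-ahead MSE (normalised):", float((err ** 2).mean()))
    ```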

  12. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  13. Neural Networks for Emotion Classification

    CERN Document Server

    Sun, Yafei

    2011-01-01

    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform backpropagation. Instead of using the partial differential of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...

  14. Artificial neural networks in neurosurgery.

    Science.gov (United States)

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of the key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANNs in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in the biomechanical assessments of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery.

  15. Virtualized cognitive network architecture for 5G cellular networks

    KAUST Repository

    Elsawy, Hesham

    2015-07-17

    Cellular networks have preserved an application-agnostic and base station (BS) centric architecture for decades. Network functionalities (e.g. user association) are decided and performed regardless of the underlying application (e.g. automation, tactile Internet, online gaming, multimedia). Such an ossified architecture imposes several hurdles against achieving the ambitious metrics of next generation cellular systems. This article first highlights the features and drawbacks of such architectural ossification. Then the article proposes a virtualized and cognitive network architecture, wherein network functionalities are implemented via software instances in the cloud, and the underlying architecture can adapt to the application of interest as well as to changes in channels and traffic conditions. The adaptation is done in terms of the network topology by manipulating connectivities and steering traffic via different paths, so as to attain the applications' requirements and network design objectives. The article presents cognitive strategies to implement some of the classical network functionalities, along with their related implementation challenges. The article further presents a case study illustrating the performance improvement of the proposed architecture as compared to conventional cellular networks, both in terms of outage probability and handover rate.

  16. Combinatorial structures and processing in neural blackboard architectures

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, Frank; de Kamps, Marc; Besold, Tarek R.; d'Avila Garcez, Artur; Marcus, Gary F.; Miikkulainen, Risto

    2015-01-01

    We discuss and illustrate Neural Blackboard Architectures (NBAs) as the basis for variable binding and combinatorial processing in the brain. We focus on the NBA for sentence structure. NBAs are based on the notion that conceptual representations are in situ, hence cannot be copied or transported.

  17. Optimizing neural network forecast by immune algorithm

    Institute of Scientific and Technical Information of China (English)

    YANG Shu-xia; LI Xiang; LI Ning; YANG Shang-dong

    2006-01-01

    Considering multi-factor influence, a forecasting model was built. The structure of the BP neural network was designed, and an immune algorithm was applied to optimize its network structure and weights. After training on power demand data for China from 1980 to 2005, a nonlinear network model was obtained relating power demand to the factors which influence it, thus verifying the proposed method. Meanwhile, the results were compared to those of a neural network optimized by a genetic algorithm. The results show that this method is superior to the neural network optimized by a genetic algorithm and is an effective approach to time series forecasting.

  18. Optimising the topology of complex neural networks

    CERN Document Server

    Jiang, Fei; Schoenauer, Marc

    2007-01-01

    In this paper, we study instances of complex neural networks, i.e. neural networks with complex topologies. We use Self-Organizing Map neural networks, whose neighbourhood relationships are defined by a complex network, to classify handwritten digits. We show that topology has a small impact on performance and robustness to neuron failures, at least at long learning times. Performance may however be increased (by almost 10%) by artificial evolution of the network topology. In our experimental conditions, the evolved networks are more random than their parents, but display a more heterogeneous degree distribution.

  19. Mobile opportunistic networks architectures, protocols and applications

    CERN Document Server

    Denko, Mieso K

    2011-01-01

    Widespread availability of pervasive and mobile devices coupled with recent advances in networking technologies make opportunistic networks one of the most promising communication technologies for a growing number of future mobile applications. Covering the basics as well as advanced concepts, this book introduces state-of-the-art research findings, technologies, tools, and innovations. Prominent researchers from academia and industry report on communication architectures, network algorithms and protocols, emerging applications, experimental studies, simulation tools, implementation test beds,

  20. A new formulation for feedforward neural networks.

    Science.gov (United States)

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods are employed, involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithm. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  1. Satellite ATM Networks: Architectures and Guidelines Developed

    Science.gov (United States)

    vonDeak, Thomas C.; Yegendu, Ferit

    1999-01-01

    An important element of satellite-supported asynchronous transfer mode (ATM) networking will involve support for the routing and rerouting of active connections. Work published under the auspices of the Telecommunications Industry Association (http://www.tiaonline.org), describes basic architectures and routing protocol issues for satellite ATM (SATATM) networks. The architectures and issues identified will serve as a basis for further development of technical specifications for these SATATM networks. Three ATM network architectures for bent pipe satellites and three ATM network architectures for satellites with onboard ATM switches were developed. The architectures differ from one another in terms of required level of mobility, supported data rates, supported terrestrial interfaces, and onboard processing and switching requirements. The documentation addresses low-, middle-, and geosynchronous-Earth-orbit satellite configurations. The satellite environment may require real-time routing to support the mobility of end devices and nodes of the ATM network itself. This requires the network to be able to reroute active circuits in real time. In addition to supporting mobility, rerouting can also be used to (1) optimize network routing, (2) respond to changing quality-of-service requirements, and (3) provide a fault tolerance mechanism. Traffic management and control functions are necessary in ATM to ensure that the quality-of-service requirements associated with each connection are not violated and also to provide flow and congestion control functions. Functions related to traffic management were identified and described. Most of these traffic management functions will be supported by on-ground ATM switches, but in a hybrid terrestrial-satellite ATM network, some of the traffic management functions may have to be supported by the onboard satellite ATM switch. Future work is planned to examine the tradeoffs of placing traffic management functions onboard a satellite as

  2. A security architecture for health information networks.

    Science.gov (United States)

    Kailar, Rajashekar; Muralidhar, Vinod

    2007-10-11

    Health information network security needs to balance exacting security controls with practicality, and ease of implementation in today's healthcare enterprise. Recent work on 'nationwide health information network' architectures has sought to share highly confidential data over insecure networks such as the Internet. Using basic patterns of health network data flow and trust models to support secure communication between network nodes, we abstract network security requirements to a core set to enable secure inter-network data sharing. We propose a minimum set of security controls that can be implemented without needing major new technologies, but yet realize network security and privacy goals of confidentiality, integrity and availability. This framework combines a set of technology mechanisms with environmental controls, and is shown to be sufficient to counter commonly encountered network security threats adequately.

  3. Coherence resonance in bursting neural networks.

    Science.gov (United States)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal, a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  4. Architecture of a Personal Network Service Layer

    Science.gov (United States)

    Joosten, Rieks; den Hartog, Frank; Selgert, Franklin

    We describe a basic service architecture that extends the currently dominant device-oriented approach of Personal Networks (PNs). It specifies functionality for runtime selection and execution of appropriate service components available in the PN, resulting in a highly dynamic, personalized, and context-aware provisioning of PN services to the user. The architectural model clearly connects the responsibilities of the various business roles with the individual properties (resources) of the PN Entities involved.

  5. Neural network classification - A Bayesian interpretation

    Science.gov (United States)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
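
    For reference, the standard result behind this interpretation can be stated in one line: with 0/1 class targets, the function that minimises the expected squared error is the conditional expectation of the target, i.e. the posterior class probability. A minimal statement (not quoted from the paper) is:

    ```latex
    \[
      F^{*}(x) \;=\; \arg\min_{F}\; \mathbb{E}\!\left[(y - F(x))^{2}\right]
              \;=\; \mathbb{E}\left[\, y \mid x \,\right]
              \;=\; P(y = 1 \mid x).
    \]
    ```

    Hence a sufficiently flexible network trained on squared error with one-hot targets approximates the Bayes posterior, and thresholding its outputs approximates the optimal Bayes classifier.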

  6. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called weights, of the synapses) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  7. Isolated Speech Recognition Using Artificial Neural Networks

    Science.gov (United States)

    2007-11-02

    In this project Artificial Neural Networks are used as a research tool to accomplish Automated Speech Recognition of normal speech. A small size...the first stage of this work are satisfactory and thus the application of artificial neural networks in conjunction with cepstral analysis in isolated word recognition holds promise.

  8. Neural Network Algorithm for Particle Loading

    Energy Technology Data Exchange (ETDEWEB)

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  9. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Creativity in design and artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Neocleous, C.C.; Esat, I.I. [Brunel Univ. Uxbridge (United Kingdom); Schizas, C.N. [Univ. of Cyprus, Nicosia (Cyprus)

    1996-12-31

    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons which are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  11. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  12. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility to use Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, Gaussian energy distribution and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  13. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  14. Trajectory generation and modulation using dynamic neural networks.

    Science.gov (United States)

    Zegers, P; Sundareshan, M K

    2003-01-01

    Generation of desired trajectory behavior using neural networks involves a particularly challenging spatio-temporal learning problem. This paper introduces a novel solution, i.e., designing a dynamic system whose terminal behavior emulates a prespecified spatio-temporal pattern independently of its initial conditions. The proposed solution uses a dynamic neural network (DNN), a hybrid architecture that employs a recurrent neural network (RNN) in cascade with a nonrecurrent neural network (NRNN). The RNN generates a simple limit cycle, which the NRNN reshapes into the desired trajectory. This architecture is simple to train. A systematic synthesis procedure based on the design of relay control systems is developed for configuring an RNN that can produce a limit cycle of elementary complexity. It is further shown that a cascade arrangement of this RNN and an appropriately trained NRNN can emulate any desired trajectory behavior irrespective of its complexity. An interesting solution to the trajectory modulation problem, i.e., online modulation of the generated trajectories using external inputs, is also presented. Results of several experiments are included to demonstrate the capabilities and performance of the DNN in handling trajectory generation and modulation problems.
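
    To make the cascade idea concrete, the sketch below uses a hand-built Hopf-type oscillator as a stand-in for the trained limit-cycle RNN and trains a small feedforward (non-recurrent) network to reshape its state into a target trajectory. The relay-control synthesis of the actual RNN and the paper's training procedure are not reproduced; all sizes and the target shape are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Stage 1: a simple limit-cycle generator stands in for the trained RNN
    # (a Hopf-type oscillator, not the relay-control design from the paper).
    def limit_cycle(n_steps, dt=0.05):
        z = np.array([0.1, 0.0])
        out = []
        for _ in range(n_steps):
            r2 = z @ z
            dz = np.array([-z[1], z[0]]) + (1.0 - r2) * z   # spirals onto the unit circle
            z = z + dt * dz
            out.append(z.copy())
        return np.array(out)

    # Target trajectory the cascade should emulate (arbitrary example shape).
    cycle = limit_cycle(400)
    phase = np.arctan2(cycle[:, 1], cycle[:, 0])
    target = np.stack([np.cos(phase), 0.5 * np.sin(2 * phase)], axis=1)

    # Stage 2: the non-recurrent network (NRNN) maps oscillator states to the target.
    n_hid = 20
    W1 = rng.normal(scale=0.5, size=(2, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.5, size=(n_hid, 2)); b2 = np.zeros(2)

    lr = 0.1
    for epoch in range(3000):
        h = np.tanh(cycle @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - target
        W2 -= lr * h.T @ err / len(cycle);    b2 -= lr * err.mean(0)
        dh = err @ W2.T * (1 - h ** 2)
        W1 -= lr * cycle.T @ dh / len(cycle); b1 -= lr * dh.mean(0)

    print("reshaping MSE:", float((err ** 2).mean()))
    ```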

  15. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  16. Rule Extraction using Artificial Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use a two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion by adding nodes one after another based on the performance of the network on training data. In the second phase, the number of relevant input units of the network is determined using a pruning algorithm. The ...

  17. Glaucoma detection based on deep convolutional neural network.

    Science.gov (United States)

    Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu

    2015-08-01

    Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show area under curve (AUC) of the receiver operating characteristic curve in glaucoma detection at 0.831 and 0.887 in the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection.
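
    The layer counts given in the abstract (four convolutional plus two fully-connected layers with dropout) can be sketched as follows. Channel widths, kernel sizes, pooling, input resolution and the use of PyTorch are all assumptions made for illustration; the paper's exact architecture and training details are not reproduced.

    ```python
    import torch
    import torch.nn as nn

    class GlaucomaNet(nn.Module):
        """Four convolutional + two fully-connected layers with dropout.

        Layer counts follow the abstract; all sizes below are placeholders.
        """
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(0.5),
                nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(256, n_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Shape check on a dummy fundus-image batch (224x224 input is an assumption).
    model = GlaucomaNet()
    print(model(torch.randn(2, 3, 224, 224)).shape)   # -> torch.Size([2, 2])
    ```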

  18. Neural network learning of optimal Kalman prediction and control

    CERN Document Server

    Linsker, Ralph

    2008-01-01

    Although there are many neural network (NN) algorithms for prediction and for control, and although methods for optimal estimation (including filtering and prediction) and for optimal control in linear systems were provided by Kalman in 1960 (with nonlinear extensions since then), there has been, to my knowledge, no NN algorithm that learns either Kalman prediction or Kalman control (apart from the special case of stationary control). Here we show how optimal Kalman prediction and control (KPC), as well as system identification, can be learned and executed by a recurrent neural network composed of linear-response nodes, using as input only a stream of noisy measurement data. The requirements of KPC appear to impose significant constraints on the allowed NN circuitry and signal flows. The NN architecture implied by these constraints bears certain resemblances to the local-circuit architecture of mammalian cerebral cortex. We discuss these resemblances, as well as caveats that limit our current ability to draw ...
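
    For reference, the computation such a network would have to learn is the standard Kalman recursion. The sketch below runs the textbook predict/update steps for a scalar random walk observed in noise; the model, noise levels and the scalar setting are illustrative assumptions, not the paper's recurrent-network construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Toy linear system: scalar random walk observed in noise.
    Q, R = 0.01, 0.5          # process and measurement noise variances (assumed known)
    x_true, xs, zs = 0.0, [], []
    for _ in range(200):
        x_true += rng.normal(scale=np.sqrt(Q))
        xs.append(x_true)
        zs.append(x_true + rng.normal(scale=np.sqrt(R)))

    # Standard Kalman recursion (the reference computation an RNN would learn).
    x_hat, P = 0.0, 1.0
    estimates = []
    for z in zs:
        # Predict step (the state transition is the identity for a random walk).
        x_pred, P_pred = x_hat, P + Q
        # Update step.
        K = P_pred / (P_pred + R)          # Kalman gain
        x_hat = x_pred + K * (z - x_pred)
        P = (1.0 - K) * P_pred
        estimates.append(x_hat)

    print("measurement MSE:", np.mean((np.array(zs) - np.array(xs)) ** 2))
    print("Kalman MSE     :", np.mean((np.array(estimates) - np.array(xs)) ** 2))
    ```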

  19. Measuring photometric redshifts using galaxy images and Deep Neural Networks

    Science.gov (United States)

    Hoyle, B.

    2016-07-01

    We propose a new method to estimate the photometric redshift of galaxies by using the full galaxy image in each measured band. This method draws from the latest techniques and advances in machine learning, in particular Deep Neural Networks. We pass the entire multi-band galaxy image into the machine learning architecture to obtain a redshift estimate that is competitive, in terms of the measured point prediction metrics, with the best existing standard machine learning techniques. The standard techniques estimate redshifts using post-processed features, such as magnitudes and colours, which are extracted from the galaxy images and are deemed to be salient by the user. This new method removes the user from the photometric redshift estimation pipeline. However we do note that Deep Neural Networks require many orders of magnitude more computing resources than standard machine learning architectures, and as such are only tractable for making predictions on datasets of size ≤50k before implementing parallelisation techniques.

  20. Cognitive optical networks: architectures and techniques

    Science.gov (United States)

    Grebeshkov, Alexander Y.

    2017-04-01

    This article analyzes architectures and techniques of optical networks, taking into account the cognitive methodology based on the continuous cycle "Observe-Orient-Plan-Decide-Act-Learn" and the ability of cognitive systems to adjust themselves through an adaptive process by responding to new changes in the environment. The cognitive optical network architecture includes a cognitive control layer with a knowledge base for control of software-configurable devices such as reconfigurable optical add-drop multiplexers, flexible optical transceivers and software-defined receivers. Some techniques for cognitive optical networks, such as flexible-grid technology, the broker-oriented technique and machine learning, are examined. Software-defined optical networks and the integration of wireless and optical networks with radio-over-fiber and fiber-wireless techniques in the context of cognitive technologies are discussed.

  1. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks: Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  2. Architecture for robust network design

    NARCIS (Netherlands)

    Immers, L.H.; Snelder, M.; Egeter, B.; Schrijver, J.

    2009-01-01

    The road network in the Netherlands and in many other countries is becoming more and more vulnerable. Small disturbances can cause major disruptions on large parts of the network. The costs of this vulnerability can add up to several billions of Euros in the future. In this paper we present a new ne

  3. Architecture for robust network design

    NARCIS (Netherlands)

    Immers, L.H.; Snelder, M.; Egeter, B.; Schrijver, J.

    2009-01-01

    The road network in the Netherlands and in many other countries is becoming more and more vulnerable. Small disturbances can cause major disruptions on large parts of the network. The costs of this vulnerability can add up to several billions of Euros in the future. In this paper we present a new

  4. Using Neural Networks for Click Prediction of Sponsored Search

    OpenAIRE

    Baqapuri, Afroze Ibrahim; Trofimov, Ilya

    2014-01-01

    Sponsored search is a multi-billion dollar industry and makes up a major source of revenue for search engines (SE). Click-through-rate (CTR) estimation plays a crucial role in ads selection, and greatly affects the SE revenue, advertiser traffic and user experience. We propose a novel architecture for solving the CTR prediction problem by combining artificial neural networks (ANN) with decision trees. First we compare ANN with respect to other popular machine learning models being used for this ...

  5. Quantum Entanglement in Neural Network States

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-04-01

    Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, and at the same time strongly deviates from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears a Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the
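
    For readers unfamiliar with the RBM representation discussed above, the unnormalised amplitude it assigns to a spin configuration (with the hidden units traced out) can be evaluated in a few lines. The parameters below are random placeholders; nothing here reproduces the paper's constructions or entanglement calculations.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    n_visible, n_hidden = 10, 20            # spins and hidden units (arbitrary sizes)
    a = rng.normal(scale=0.1, size=n_visible) + 1j * rng.normal(scale=0.1, size=n_visible)
    b = rng.normal(scale=0.1, size=n_hidden) + 1j * rng.normal(scale=0.1, size=n_hidden)
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden)) \
        + 1j * rng.normal(scale=0.1, size=(n_visible, n_hidden))

    def rbm_amplitude(s):
        """Unnormalised amplitude Psi(s) = exp(a.s) * prod_j 2 cosh(b_j + s.W_j)
        for a spin configuration s in {-1, +1}^n, with hidden units summed out."""
        theta = b + s @ W
        return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

    s = rng.choice([-1.0, 1.0], size=n_visible)
    print(rbm_amplitude(s))
    ```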

  6. Wavelet Neural Networks for Adaptive Equalization

    Institute of Scientific and Technical Information of China (English)

    JIANG Minghu; DENG Beixing; GIELEN Georges; ZHANG Bo

    2003-01-01

    A structure based on wavelet neural networks (WNNs) is proposed for nonlinear channel equalization in a digital communication system. A construction algorithm based on the minimum error probability (MEP) criterion is presented and applied to update the parameter matrix of the wavelet network. Our experimental results show that the proposed wavelet-network-based equalizer significantly improves neural modeling accuracy, performs well in compensating the nonlinear distortion introduced by the channel, and outperforms conventional neural networks in terms of signal-to-noise ratio and channel nonlinearity.
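
    To make the structure concrete, here is a minimal numpy sketch of a wavelet-network decision stage for equalization: hidden "wavelons" apply a Mexican-hat wavelet to shifted and dilated projections of a window of received samples, and a linear output layer makes the symbol decision. The weights, window length, and wavelet choice are illustrative assumptions; the MEP training procedure of the paper is not reproduced.

```python
import numpy as np

def mexican_hat(t):
    """'Mexican hat' wavelet, a common choice of wavelon activation."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def wnn_equalize(received, W, translations, dilations, v):
    """One hidden layer of wavelons followed by a linear output."""
    z = W @ received                              # project the channel samples
    hidden = mexican_hat((z - translations) / dilations)
    return np.sign(v @ hidden)                    # hard decision on the symbol

rng = np.random.default_rng(12)
taps = 5                                          # sliding window of channel outputs
W = rng.normal(size=(8, taps))                    # placeholder (untrained) weights
translations = rng.normal(size=8)
dilations = np.ones(8)
v = rng.normal(size=8)

window = rng.normal(size=taps)                    # distorted received samples
print("equalized symbol decision:", wnn_equalize(window, W, translations, dilations, v))
```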

  7. On the deduction of galaxy abundances with evolutionary neural networks

    CERN Document Server

    Taylor, Michael

    2007-01-01

    A growing number of indicators are now being used with some confidence to measure the metallicity (Z) of photoionisation regions in planetary nebulae, galactic HII regions (GHIIRs), extra-galactic HII regions (EGHIIRs) and HII galaxies (HIIGs). However, a universal indicator valid also at high metallicities has yet to be found. Here, we report on a new artificial intelligence-based approach to determine metallicity indicators that shows promise for the provision of improved empirical fits. The method hinges on the application of an evolutionary neural network to observational emission line data. The network's DNA, encoded in its architecture, weights and neuron transfer functions, is evolved using a genetic algorithm. Furthermore, selection, operating on a set of 10 distinct neuron transfer functions, means that the empirical relation encoded in the network solution architecture is in functional rather than numerical form. Thus the network solutions provide an equation for the metallicity in terms of line ratios ...

  8. Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks

    Science.gov (United States)

    Jorgensen, Charles C.

    1997-01-01

    A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNs) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and an on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.

  9. Neural networks for structural design - An integrated system implementation

    Science.gov (United States)

    Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han

    1992-01-01

    The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.

  10. Using Hybrid Algorithm to Improve Intrusion Detection in Multi Layer Feed Forward Neural Networks

    Science.gov (United States)

    Ray, Loye Lynn

    2014-01-01

    The need to detect malicious behavior on computer networks continues to be important to maintaining a safe and secure environment. The purpose of this study was to determine the relationship of multilayer feed-forward neural network architecture to the ability to detect abnormal behavior in networks. This involved building, training, and…

  12. Subspace learning of neural networks

    CERN Document Server

    Cheng Lv, Jian; Zhou, Jiliu

    2010-01-01

    Preface. Chapter 1. Introduction: 1.1 Introduction (1.1.1 Linear Neural Networks; 1.1.2 Subspace Learning); 1.2 Subspace Learning Algorithms (1.2.1 PCA Learning Algorithms; 1.2.2 MCA Learning Algorithms; 1.2.3 ICA Learning Algorithms); 1.3 Methods for Convergence Analysis (1.3.1 SDT Method; 1.3.2 DCT Method; 1.3.3 DDT Method); 1.4 Block Algorithms; 1.5 Simulation Data Set and Notation; 1.6 Conclusions. Chapter 2. PCA Learning Algorithms with Constant Learning Rates: 2.1 Oja's PCA Learning Algorithms (2.1.1 The Algorithms; 2.1.2 Convergence Issue); 2.2 Invariant Sets (2.2.1 Properties of Invariant Sets; 2.2.2 Conditions for Invariant Sets); 2. …

  13. Integrated Network Architecture for NASA's Orion Missions

    Science.gov (United States)

    Bhasin, Kul B.; Hayden, Jeffrey L.; Sartwell, Thomas; Miller, Ronald A.; Hudiburg, John J.

    2008-01-01

    NASA is planning a series of short and long duration human and robotic missions to explore the Moon and then Mars. The series of missions will begin with a new crew exploration vehicle (called Orion) that will initially provide crew exchange and cargo supply support to the International Space Station (ISS) and then become a human conveyance for travel to the Moon. The Orion vehicle will be mounted atop the Ares I launch vehicle for a series of pre-launch tests and then launched and inserted into low Earth orbit (LEO) for crew exchange missions to the ISS. The Orion and Ares I comprise the initial vehicles in the Constellation system of systems that later includes Ares V, Earth departure stage, lunar lander, and other lunar surface systems for the lunar exploration missions. These key systems will enable the lunar surface exploration missions to be initiated in 2018. The complexity of the Constellation system of systems and missions will require a communication and navigation infrastructure to provide low and high rate forward and return communication services, tracking services, and ground network services. The infrastructure must provide robust, reliable, safe, sustainable, and autonomous operations at minimum cost while maximizing the exploration capabilities and science return. The infrastructure will be based on a network of networks architecture that will integrate NASA legacy communication, modified elements, and navigation systems. New networks will be added to extend communication, navigation, and timing services for the Moon missions. Internet protocol (IP) and network management systems within the networks will enable interoperability throughout the Constellation system of systems. An integrated network architecture has been developed based on the emerging Constellation requirements for Orion missions. The architecture, as presented in this paper, addresses the early Orion missions to the ISS with communication, navigation, and network services over five

  14. Neural networks for damage identification

    Energy Technology Data Exchange (ETDEWEB)

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
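
    As a rough illustration of how the first (classical) PNN reaches a decision, the sketch below implements a Parzen-window classifier with one Gaussian kernel per exemplar and synthetic "undamaged"/"damaged" feature vectors. The feature dimension, bandwidth, and data are placeholders, not the experimental measurements of the paper.

```python
import numpy as np

def pnn_predict(x, exemplars, labels, sigma=0.5):
    """Classify x with a probabilistic neural network: one Gaussian
    kernel per training exemplar, averaged per class (Parzen estimate)."""
    scores = {}
    for c in np.unique(labels):
        pts = exemplars[labels == c]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
undamaged = rng.normal(0.0, 1.0, size=(50, 4))   # response features, class 0
damaged   = rng.normal(1.5, 1.0, size=(50, 4))   # response features, class 1
X = np.vstack([undamaged, damaged])
y = np.array([0] * 50 + [1] * 50)

test = rng.normal(1.5, 1.0, size=4)              # unseen "damaged" measurement
print(pnn_predict(test, X, y))                   # expected: 1
```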

  15. Parallel multilayer perceptron neural network used for hyperspectral image classification

    Science.gov (United States)

    Garcia-Salgado, Beatriz P.; Ponomaryov, Volodymyr I.; Robles-Gonzalez, Marco A.

    2016-04-01

    This study is focused on time optimization for the classification problem, presenting a comparison of five Artificial Neural Network Multilayer Perceptron (ANN-MLP) architectures. We use the Artificial Neural Network (ANN) because it allows patterns in the data to be recognized in less time. Time and classification accuracy are taken into account together for the comparison. For the time comparison, two computational paradigms are analysed for each ANN-MLP architecture with three schemes. Firstly, sequential programming is applied using a single CPU core. Secondly, parallel programming is employed over a multi-core CPU architecture. Finally, a programming model running on a GPU architecture is implemented. Furthermore, the classification accuracy is compared between the proposed five ANN-MLP architectures and a state-of-the-art Support Vector Machine (SVM) with three classification frames: 50%, 60%, and 70% of the data set's observations are randomly selected to train the classifiers. Also, a visual comparison of the classified results is presented. The Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) criteria are also calculated to characterise visual perception. The images employed were acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the Reflective Optics System Imaging Spectrometer (ROSIS) and the Hyperion sensor.

  16. Analog implementation of pulse-coupled neural networks.

    Science.gov (United States)

    Ota, Y; Wilamowski, B M

    1999-01-01

    This paper presents a compact architecture for analog CMOS hardware implementation of voltage-mode pulse-coupled neural networks (PCNNs). The hardware implementation method shows inherent fault tolerance and high speed, usually more than an order of magnitude over the software counterpart. The computational style described in this article mimics a biological neural network using pulse-stream signaling and analog summation and multiplication. The pulse-stream encoding technique uses pulse streams to carry information and control analog circuitry, while storing further analog information on the time axis. The main feature of the proposed neuron circuit is that the structure is compact, yet exhibits all the basic properties of natural biological neurons. Functional and structural forms of neural and synaptic functions are presented along with simulation results. Finally, the proposed design is applied to image processing to demonstrate successful restoration of images and their features.

  17. Multi-column Deep Neural Networks for Image Classification

    CERN Document Server

    Cireşan, Dan; Schmidhuber, Juergen

    2012-01-01

    Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.
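
    The committee mechanism itself is simple to state: each column produces class probabilities, and the multi-column prediction is their average. The sketch below illustrates just that averaging step with random stand-ins for trained column outputs; it is not the paper's trained CNN columns.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
n_columns, n_classes = 5, 10                       # e.g. five DNN columns, ten digit classes
logits = rng.normal(size=(n_columns, n_classes))   # stand-ins for trained column outputs

per_column = softmax(logits)                       # each column's class probabilities
committee = per_column.mean(axis=0)                # multi-column prediction = average
print("predicted class:", committee.argmax())
```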

  18. Neural Network Combination by Fuzzy Integral for Robust Change Detection in Remotely Sensed Imagery

    OpenAIRE

    Nemmour Hassiba; Chibani Youcef

    2005-01-01

    Combining multiple neural networks has been used to improve the decision accuracy in many application fields including pattern recognition and classification. In this paper, we investigate the potential of this approach for land cover change detection. In a first step, we perform many experiments in order to find the optimal individual networks in terms of architecture and training rule. In the second step, different neural network change detectors are combined using a method based on the no...

  19. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  20. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  1. Learning Algorithms of Multilayer Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1996-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward multilayer neural network, with far interlayer synaptic connections, and we obtain a learning rule similar to that of the Boltzmann machine on the same multilayer structure. By applying a mean field approximation to the stochastic feed-forward neural network, the generalized error back-propagation learning rule is derived for a deterministic analog feed-forward multilayer network with the far interlay...

  2. A SIMO Fiber Aided Wireless Network Architecture

    OpenAIRE

    Ray, Siddharth; Medard, Muriel; Zheng, Lizhong

    2006-01-01

    The concept of a fiber aided wireless network architecture (FAWNA) is introduced in [Ray et al., Allerton Conference 2005], which allows high-speed mobile connectivity by leveraging the speed of optical networks. In this paper, we consider a single-input, multiple-output (SIMO) FAWNA, which consists of a SIMO wireless channel and an optical fiber channel, connected through wireless-optical interfaces. We propose a scheme where the received wireless signal at each interface is quantized and se...

  3. Image segmentation using neural tree networks

    Science.gov (United States)

    Samaddar, Sumitro; Mammone, Richard J.

    1993-06-01

    We present a technique for image segmentation using Neural Tree Networks (NTN). We also modify the NTN architecture to let it solve multi-class classification problems with only binary fan-out. We have used a realistic case study of segmenting the pole, coil and painted coil regions of light bulb filaments (LBF). The input to the network is a set of the maximum, minimum and average intensities in radial slices of a circular window around a pixel, taken from a front-lit and a back-lit image of an LBF. Training is done with a composite image drawn from images of many LBFs. Each node of the NTN is a multi-layer perceptron and has one output for each segment class. These outputs are treated as probabilities to compute a confidence value for the segmentation of that pixel. Segmentation results with high confidence values are deemed to be correct and not processed further, while those with moderate and low confidence values are deemed to be outliers by this node and passed down the tree to children nodes. These tend to be pixels on the boundaries of different regions. The results are favorably compared with a traditional segmentation technique applied to the LBF test case.
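
    A minimal sketch of the confidence-based routing idea follows: each node produces per-class outputs treated as probabilities, and a sample is passed down to a child node when the confidence is low. The confidence measure used here (the margin between the two largest outputs), the threshold, and the weights are assumptions for illustration, not the exact rule of the paper.

```python
import numpy as np

def node_outputs(x, W, b):
    """A node's per-class scores, squashed to lie in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def classify(x, nodes, threshold=0.3):
    """Walk down the tree: accept a node's decision when its confidence
    (here the margin between the two largest outputs) is high enough;
    otherwise pass the pixel on to the next (child) node."""
    for W, b in nodes:
        probs = node_outputs(x, W, b)
        probs = probs / probs.sum()
        top2 = np.sort(probs)[-2:]
        if top2[1] - top2[0] > threshold:
            return int(np.argmax(probs))
    return int(np.argmax(probs))                 # the leaf node decides regardless

rng = np.random.default_rng(2)
nodes = [(rng.normal(size=(3, 6)), rng.normal(size=3)) for _ in range(3)]
pixel_features = rng.normal(size=6)              # radial min/max/mean intensities (placeholder)
print(classify(pixel_features, nodes))
```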

  4. Neural tree network method for image segmentation

    Science.gov (United States)

    Samaddar, Sumitro; Mammone, Richard J.

    1994-02-01

    We present an extension of the neural tree network (NTN) architecture to let it solve multi-class classification problems with only binary fan-out. We then demonstrate its effectiveness by applying it in a method for image segmentation. Each node of the NTN is a multi-layer perceptron and has one output for each segment class. These outputs are treated as probabilities to compute a confidence value for the segmentation of that pixel. Segmentation results with high confidence values are deemed to be correct and not processed further, while those with moderate and low confidence values are deemed to be outliers by this node and passed down the tree to children nodes. These tend to be pixels on the boundaries of different regions. We have used a realistic case study of segmenting the pole, coil and painted coil regions of light bulb filaments (LBF). The input to the network is a set of the maximum, minimum and average intensities in radial slices of a circular window around a pixel, taken from a front-lit and a back-lit image of an LBF. Training is done with a composite image drawn from images of many LBFs. The results are favorably compared with a traditional segmentation technique applied to the LBF test case.

  5. Acute appendicitis diagnosis using artificial neural networks.

    Science.gov (United States)

    Park, Sung Yun; Kim, Sung Min

    2015-01-01

    Artificial neural networks are a pattern-analysis method that is being rapidly adopted in the biomedical field. The aim of this research was to propose an appendicitis diagnosis system using artificial neural networks (ANNs). Data from 801 patients of the university hospital in Dongguk were used to construct artificial neural networks for diagnosing appendicitis and acute appendicitis. A radial basis function neural network structure (RBF), a multilayer neural network structure (MLNN), and a probabilistic neural network structure (PNN) were used as the artificial neural network models. The Alvarado clinical scoring system was used for comparison with the ANNs. The accuracy of the RBF, PNN, MLNN, and Alvarado was 99.80%, 99.41%, 97.84%, and 72.19%, respectively. The area under the ROC (receiver operating characteristic) curve of the RBF, PNN, MLNN, and Alvarado was 0.998, 0.993, 0.985, and 0.633, respectively. The proposed models using ANNs for diagnosing appendicitis showed good performance and were significantly better than the Alvarado clinical scoring system (p < 0.001). With cooperation among facilities, the accuracy of diagnosing this serious health condition can be improved.
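
    As an illustration of the general workflow (not the original hospital data or network settings), the sketch below trains a multilayer network on synthetic clinical-style features with scikit-learn and reports accuracy and the area under the ROC curve, the two figures of merit used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in for clinical features (e.g. WBC count, pain score, fever ...)
rng = np.random.default_rng(3)
X = rng.normal(size=(801, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=801) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlnn = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
mlnn.fit(X_tr, y_tr)

proba = mlnn.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, mlnn.predict(X_te)))
print("AUC     :", roc_auc_score(y_te, proba))
```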

  6. A security architecture for personal networks

    NARCIS (Netherlands)

    Jehangir, Assed; Heemstra de Groot, Sonia M.

    2006-01-01

    Abstract Personal Network (PN) is a new concept utilizing pervasive computing to meet the needs of the user. As PNs edge closer towards reality, security becomes an important concern since any vulnerability in the system will limit its practical use. In this paper we introduce a security architectur

  7. Architecture of a Personal Network service layer

    NARCIS (Netherlands)

    Joosten, H.J.M.; Hartog, F.T.H. den; Selgert, F.

    2009-01-01

    We describe a basic service architecture that extends the currently dominant device-oriented approach of Personal Networks (PNs). It specifies functionality for runtime selection and execution of appropriate service components available in the PN, resulting in a highly dynamic, personalized, and

  8. Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks

    CERN Document Server

    Kaaniche, Heni

    2010-01-01

    Mobility prediction allows estimating the stability of paths in mobile wireless ad hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural-network-based method for mobility prediction in ad hoc networks. The method consists of a multi-layer recurrent neural network trained with the backpropagation-through-time algorithm.
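
    A minimal Elman-style recurrent network forward pass over a position history is sketched below; a full implementation would train the weights with backpropagation through time, which is omitted here. Shapes, weights, and the toy trajectory are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def rnn_forward(positions, Wxh, Whh, Why, bh, by):
    """Run an Elman recurrent network over a sequence of (x, y) positions
    and return the predicted next position."""
    h = np.zeros(Whh.shape[0])
    for p in positions:
        h = np.tanh(Wxh @ p + Whh @ h + bh)    # hidden state carries the history
    return Why @ h + by                         # linear readout of the next position

rng = np.random.default_rng(4)
hidden = 8
Wxh = rng.normal(scale=0.3, size=(hidden, 2))
Whh = rng.normal(scale=0.3, size=(hidden, hidden))
Why = rng.normal(scale=0.3, size=(2, hidden))
bh, by = np.zeros(hidden), np.zeros(2)

history = [np.array([t * 0.1, t * 0.05]) for t in range(10)]   # one node's track
print("predicted next position:", rnn_forward(history, Wxh, Whh, Why, bh, by))
```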

  9. A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Jim Harkin

    2009-01-01

    FPGA devices have emerged as a popular platform for the rapid prototyping of biological Spiking Neural Network (SNN) applications, offering the key requirement of reconfigurability. However, FPGAs do not efficiently realise the biologically plausible neuron and synaptic models of SNNs, and current FPGA routing structures cannot accommodate the high levels of interneuron connectivity inherent in complex SNNs. This paper highlights and discusses the current challenges of implementing scalable SNNs on reconfigurable FPGAs. The paper proposes a novel field programmable neural network architecture (EMBRACE), incorporating low-power analogue spiking neurons, interconnected using a Network-on-Chip architecture. Results on the evaluation of the EMBRACE architecture using the XOR benchmark problem are presented, and the performance of the architecture is discussed. The paper also discusses the adaptability of the EMBRACE architecture in supporting fault tolerant computing.

  10. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if the new variables were Ising spins interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  11. The architecture of functional interaction networks in the retina.

    Science.gov (United States)

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2011-02-23

    Sensory information is represented in the brain by the joint activity of large groups of neurons. Recent studies have shown that, although the number of possible activity patterns and underlying interactions is exponentially large, pairwise-based models give a surprisingly accurate description of neural population activity patterns. We explored the architecture of maximum entropy models of the functional interaction networks underlying the response of large populations of retinal ganglion cells, in adult tiger salamander retina, responding to natural and artificial stimuli. We found that we can further simplify these pairwise models by neglecting weak interaction terms or by relying on a small set of interaction strengths. Comparing network interactions under different visual stimuli, we show the existence of local network motifs in the interaction map of the retina. Our results demonstrate that the underlying interaction map of the retina is sparse and dominated by local overlapping interaction modules.
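
    To make the pairwise-model idea concrete, the sketch below evaluates an Ising-like maximum entropy model over all activity patterns of a small binary population and checks how pruning weak interaction terms changes the distribution. The fields and couplings are random placeholders, not fitted retinal parameters.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n = 6                                    # small population so all patterns can be enumerated
h = rng.normal(scale=0.5, size=n)        # single-cell "fields"
J = rng.normal(scale=0.2, size=(n, n))   # pairwise interaction map
J = np.triu(J, 1)                        # count each pair once

def energy(s, couplings):
    return -(h @ s + s @ couplings @ s)

patterns = np.array(list(product([0, 1], repeat=n)))
E = np.array([energy(s, J) for s in patterns])
p = np.exp(-E)
p /= p.sum()                             # pairwise-model pattern probabilities

# Sparsify: drop weak interactions and measure how much the distribution moves
J_sparse = np.where(np.abs(J) > 0.2, J, 0.0)
E2 = np.array([energy(s, J_sparse) for s in patterns])
p2 = np.exp(-E2)
p2 /= p2.sum()
print("total variation after pruning weak terms:", 0.5 * np.abs(p - p2).sum())
```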

  12. MIRAI Architecture for Heterogeneous Network

    NARCIS (Netherlands)

    Wu, Gang; Mizuno, Mitsuhiko; Havinga, Paul J.M.

    One of the keywords that describe next-generation wireless communications is "seamless." As part of the e-Japan Plan promoted by the Japanese Government, the Multimedia Integrated Network by Radio Access Innovation project has as its goal the development of new technologies to enable seamless

  13. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB.

  14. Neural networks techniques applied to reservoir engineering

    Energy Technology Data Exchange (ETDEWEB)

    Flores, M. [Gerencia de Proyectos Geotermoelectricos, Morelia (Mexico); Barragan, C. [RockoHill de Mexico, Indiana (Mexico)

    1995-12-31

    Neural Networks are considered the greatest technological advance since the transistor. They are expected to be a common household item by the year 2000. An attempt has been made to apply Neural Networks to an important geothermal problem: predicting well production and well completion during drilling in a geothermal field. This was done in the Los Humeros geothermal field, using two common types of Neural Network models available in commercial software. Results show the learning capacity of the developed model and its precision in the predictions that were made.

  15. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The artificial neural network model of this research uses slope characteristics as input and leads to the output in the form of the probability of failure...... and factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and the safety factor of landslide hazard in the study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss...

  16. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
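
    One common way to make a network estimate a conditional quantile is to train it on the pinball (quantile) loss; the numpy sketch below does this for a one-hidden-layer network by plain gradient descent. This is an illustrative stand-in, not the kernel-estimation construction developed in the paper, and the data, network size, and learning rate are assumed values.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(2 * x[:, 0]) + rng.normal(scale=0.3, size=400)   # noisy target
tau = 0.9                                                    # desired quantile level

H = 16
W1, b1 = rng.normal(scale=0.5, size=(H, 1)), np.zeros(H)
w2, b2 = rng.normal(scale=0.5, size=H), 0.0
lr = 0.02

for _ in range(3000):
    a = np.tanh(x @ W1.T + b1)                # hidden layer
    q = a @ w2 + b2                           # predicted tau-quantile
    e = y - q
    # gradient of the pinball loss w.r.t. q: tau where e > 0, tau - 1 otherwise
    g = np.where(e > 0, tau, tau - 1.0)
    dq = -g / len(x)
    # backpropagate through the small network
    w2 -= lr * a.T @ dq
    b2 -= lr * dq.sum()
    da = np.outer(dq, w2) * (1 - a ** 2)
    W1 -= lr * da.T @ x
    b1 -= lr * da.sum(axis=0)

print("fraction of targets below the fitted quantile:", np.mean(y <= q))
```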

  18. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the jet data from miniaodsim (ak4 chs). The jet data was not well suited to a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  19. Threshold control of chaotic neural network.

    Science.gov (United States)

    He, Guoguang; Shrimali, Manish Dev; Aihara, Kazuyuki

    2008-01-01

    The chaotic neural network constructed with chaotic neurons exhibits rich dynamic behaviour with a nonperiodic associative memory. In the chaotic neural network, however, it is difficult to distinguish the stored patterns in the output patterns because of the chaotic state of the network. In order to apply the nonperiodic associative memory to information search, pattern recognition, etc., it is necessary to control chaos in the chaotic neural network. We have studied the chaotic neural network with threshold-activated coupling, which provides a controlled network with associative memory dynamics. The network converges to one of its stored patterns and/or reverse patterns, whichever has the smallest Hamming distance from the initial state of the network. The range of the threshold applied to control the neurons in the network depends on the noise level in the initial pattern and decreases with the increase of noise. The chaos control in the chaotic neural network by threshold-activated coupling at varying time intervals provides controlled output patterns with different temporal periods which depend upon the control parameters.

  20. Prune-able fuzzy ART neural architecture for robot map learning and navigation in dynamic environments.

    Science.gov (United States)

    Araújo, Rui

    2006-09-01

    Mobile robots must be able to build their own maps to navigate in unknown worlds. Expanding a previously proposed method based on the fuzzy ART neural architecture (FARTNA), this paper introduces a new online method for learning maps of unknown dynamic worlds. For this purpose the new Prune-able fuzzy adaptive resonance theory neural architecture (PAFARTNA) is introduced. It extends the FARTNA self-organizing neural network with novel mechanisms that provide important dynamic adaptation capabilities. Relevant PAFARTNA properties are formulated and demonstrated. A method is proposed for the perception of object removals, and then integrated with PAFARTNA. The proposed methods are integrated into a navigation architecture. With the new navigation architecture the mobile robot is able to navigate in changing worlds, and a degree of optimality is maintained, associated to a shortest path planning approach implemented in real-time over the underlying global world model. Experimental results obtained with a Nomad 200 robot are presented demonstrating the feasibility and effectiveness of the proposed methods.

  1. Neural network payload estimation for adaptive robot control.

    Science.gov (United States)

    Leahy, M R; Johnson, M A; Rogers, S K

    1991-01-01

    A concept is proposed for utilizing artificial neural networks to enhance the high-speed tracking accuracy of robotic manipulators. Tracking accuracy is a function of the controller's ability to compensate for disturbances produced by dynamical interactions between the links. A model-based control algorithm uses a nominal model of those dynamical interactions to reduce the disturbances. The problem is how to provide accurate dynamics information to the controller in the presence of payload uncertainty and modeling error. Neural network payload estimation uses a series of artificial neural networks to recognize the payload variation associated with a degradation in tracking performance. The network outputs are combined with a knowledge of nominal dynamics to produce a computationally efficient direct form of adaptive control. The concept is validated through experimentation and analysis on the first three links of a PUMA-560 manipulator. A multilayer perceptron architecture with two hidden layers is used. Integration of the principles of neural network pattern recognition and model-based control produces a tracking algorithm with enhanced robustness to incomplete dynamic information. Tracking efficacy and applicability to robust control algorithms are discussed.

  2. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.

  3. Time-Delay Neural Network for Smart MIMO Channel Estimation in Downlink 4G-LTE-Advance System

    OpenAIRE

    Nirmalkumar S. Reshamwala; Pooja S. Suratia; Satish K. Shah

    2014-01-01

    Long-Term Evolution (LTE) is the next generation of current mobile telecommunication networks. LTE has a new flat radio-network architecture and a significant increase in spectrum efficiency. In this paper, the main focus is on the throughput performance analysis of robust MIMO channel estimators for the Downlink Long Term Evolution-Advance (DL LTE-A) 4G system using three Artificial Neural Networks: Feed-forward neural network (FFNN), Cascade-forward neural network (CFNN) and Time-Delay neural network (TDNN) a...

  4. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.

  5. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  6. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    Science.gov (United States)

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, in order to provide an approach to solve this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework is named the tree state classification machine (TSCM), which extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we carefully study and find that the heuristic rule can improve the security of TSCM-based neural cryptography. Therefore, TSCM and the heuristic rule can guide us in designing a large number of effective neural cryptography candidates, among which more secure instances can be achieved. Significantly, in light of TSCM and the heuristic rule, we further show that our designed neural cryptography outperforms TPM (the most secure model at present) in terms of security. Finally, a series of numerical simulation experiments is provided to verify the validity and applicability of our results.
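
    For orientation, the sketch below synchronizes two tree parity machines (the TPM structure that TSCM generalizes) with the Hebbian rule: weights are updated only on inputs where the public parity outputs agree, and the synchronized weights serve as the shared key. The sizes K, N, L are small illustrative values, and the TSCM extension and heuristic rule of the paper are not reproduced.

```python
import numpy as np

K, N, L = 3, 10, 3                          # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(7)

def outputs(W, X):
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = -1                  # break ties so the parity is +/-1
    return sigma, int(np.prod(sigma))

def hebbian(W, X, sigma, tau):
    """Update only hidden units that agree with the parity output, then clip."""
    for k in range(K):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + tau * X[k], -L, L)

A = rng.integers(-L, L + 1, size=(K, N))    # party A's secret weights
B = rng.integers(-L, L + 1, size=(K, N))    # party B's secret weights

for step in range(1, 50001):
    X = rng.choice([-1, 1], size=(K, N))    # public random input
    sA, tA = outputs(A, X)
    sB, tB = outputs(B, X)
    if tA == tB:                            # learn only when the public parities agree
        hebbian(A, X, sA, tA)
        hebbian(B, X, sB, tB)
    if np.array_equal(A, B):
        print("synchronized after", step, "inputs; key prefix:", A.flatten()[:8])
        break
```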

  7. Mutual information in a dilute, asymmetric neural network model

    Science.gov (United States)

    Greenfield, Elliot

    We study the computational properties of a neural network consisting of binary neurons with dilute asymmetric synaptic connections. This simple model allows us to simulate large networks which can reflect more of the architecture and dynamics of real neural networks. Our main goal is to determine the dynamical behavior that maximizes the network's ability to perform computations. To this end, we apply information theory, measuring the average mutual information between pairs of pre- and post-synaptic neurons. Communication of information between neurons is an essential requirement for collective computation. Previous workers have demonstrated that neural networks with asymmetric connections undergo a transition from ordered to chaotic behavior as certain network parameters, such as the connectivity, are changed. We find that the average mutual information has a peak near the order-chaos transition, implying that the network can most efficiently communicate information between cells in this region. The mutual information peak becomes increasingly pronounced when the basic model is extended to incorporate more biologically realistic features, such as a variable threshold and nonlinear summation of inputs. We find that the peak in mutual information near the phase transition is a robust feature of the system for a wide range of assumptions about post-synaptic integration.
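
    The quantity being maximized can be computed directly from binary activity traces, as in the sketch below: the mutual information of each neuron pair is estimated from the joint histogram and averaged over all pairs. The simulated activity here is a placeholder, not the dilute asymmetric network of the study.

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (bits) between two binary activity traces."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

rng = np.random.default_rng(8)
T, n = 5000, 20
states = rng.integers(0, 2, size=(T, n))              # binary neuron states over time
flip = rng.random(T) < 0.1
states[:, 1] = np.where(flip, 1 - states[:, 0], states[:, 0])   # neuron 1 tracks neuron 0

pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
avg_mi = np.mean([mutual_information(states[:, i], states[:, j]) for i, j in pairs])
print("average pairwise mutual information (bits):", avg_mi)
```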

  8. Character Recognition Using Novel Optoelectronic Neural Network

    Science.gov (United States)

    1993-04-01

    The report covers the ADALINE neuron and linear separability, which provides a justification for multilayer networks; the MADALINE (many ADALINE) multilayer network is also covered. The ADALINE is a basic element used in many neural networks (Figure 3.1); it functions as an adaptive threshold logic element. In digital implementation, an input …

  9. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within...... statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given...

  10. Satellite Networks: Architectures, Applications, and Technologies

    Science.gov (United States)

    Bhasin, Kul (Compiler)

    1998-01-01

    Since global satellite networks are moving to the forefront in enhancing the national and global information infrastructures due to communication satellites' unique networking characteristics, a workshop was organized to assess the progress made to date and chart the future. This workshop provided the forum to assess the current state-of-the-art, identify key issues, and highlight the emerging trends in the next-generation architectures, data protocol development, communication interoperability, and applications. Presentations on overview, state-of-the-art in research, development, deployment and applications and future trends on satellite networks are assembled.

  11. Grid architecture model of network centric warfare

    Institute of Scientific and Technical Information of China (English)

    Yan Tihua; Wang Baoshu

    2006-01-01

    NCW (network-centric warfare) is a form of information warfare centered on the network. A global network-centric warfare architecture based on OGSA grid technology is put forward; it is a four-level system comprising the user level, the application level, the grid middleware layer and the resource level. In the grid middleware layer, a BPEL4WS grid service composition method based on a virtual hosting environment is introduced. In addition, the NCW grid service model is built with the help of Eclipse-SDK-3.0.1 and Bpws4j.

  12. Fast notification architecture for wireless sensor networks

    Science.gov (United States)

    Lee, Dong-Hahk

    2013-03-01

    In an emergency, since it is vital to transmit the message to users immediately after analysing the data to prevent disaster, this article presents the deployment of a fast notification architecture for a wireless sensor network. The sensor nodes of the proposed architecture monitor an emergency situation periodically and transmit the sensing data immediately to the sink node. The grade of a fire situation is decided according to a decision rule using the sensed values of temperature, CO, smoke density and temperature increase rate. To estimate the grade of air pollution, the sensing data, such as dust, formaldehyde, NO2 and CO2, are applied to a given knowledge model. Since the sink node in the architecture has a ZigBee interface, it can transmit alert messages in real time, according to the analysed results received from the host server, to terminals equipped with a SIM card-type ZigBee module. The host server also notifies registered users who have cellular phones through the short message service server of the cellular network. Thus, the proposed architecture can adapt to an emergency situation dynamically, compared to a conventional architecture using video processing. In the testbed, after generating air pollution and fire data, the terminal receives the message in less than 3 s. The test results show that this system can also be applied to buildings and public areas where many people gather, to prevent unexpected disasters in urban settings.
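
    The abstract describes a rule-based grading of the fire situation from temperature, CO, smoke density, and the temperature increase rate, but does not give the rule itself; the sketch below shows one hypothetical way such a grading function might look, with made-up thresholds that are not taken from the paper.

```python
def fire_grade(temp_c, co_ppm, smoke_density, temp_rise_per_min):
    """Grade a fire situation from sensor readings.

    The threshold values below are hypothetical -- the paper's actual
    decision rule and limits are not reproduced here."""
    alarms = 0
    if temp_c > 57:
        alarms += 1
    if co_ppm > 50:
        alarms += 1
    if smoke_density > 0.1:
        alarms += 1
    if temp_rise_per_min > 8:
        alarms += 1

    if alarms >= 3:
        return "fire"
    if alarms == 2:
        return "warning"
    if alarms == 1:
        return "watch"
    return "normal"

print(fire_grade(temp_c=62, co_ppm=80, smoke_density=0.02, temp_rise_per_min=10))
```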

  13. Detection of Denial of Service Attacks against Domain Name System Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohd Fadlee A. Rasid

    2009-11-01

    In this paper we introduce an intrusion detection system for Denial of Service (DoS) attacks against the Domain Name System (DNS). Our system architecture consists of two main parts: a statistical preprocessor and a neural network classifier. The preprocessor extracts the required statistical features in a short time frame from the traffic received by the target name server. We compared three different neural networks for detecting and classifying different types of DoS attacks. The proposed system was evaluated in a simulated network and showed that the best-performing neural network is a feed-forward network trained with backpropagation, with an accuracy of 99%.

  14. An Artificial Neural Network for Data Forecasting Purposes

    Directory of Open Access Journals (Sweden)

    Catalina Lucia COCIANU

    2015-01-01

    Considering the fact that markets are generally influenced by different external factors, stock market prediction is one of the most difficult tasks of time series analysis. The research reported in this paper aims to investigate the potential of artificial neural networks (ANN) in solving the forecast task in the most general case, when the time series are non-stationary. We used a feed-forward neural architecture: the nonlinear autoregressive network with exogenous inputs. The network training function used to update the weight and bias parameters corresponds to the gradient-descent-with-adaptive-learning-rate variant of the backpropagation algorithm. The results obtained using this technique are compared with those resulting from some ARIMA models. We used the mean square error (MSE) measure to evaluate the performance of these two models. The comparative analysis leads to the conclusion that the proposed model can be successfully applied to forecast financial data.
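
    A rough stand-in for the NARX setup is sketched below: lagged values of the series plus an exogenous input are fed to a small multilayer perceptron, and the forecast MSE is compared against a naive last-value forecast. The data, lag count, and network size are illustrative; the adaptive-learning-rate backpropagation variant of the paper is replaced here by scikit-learn's default optimizer.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(9)
t = np.arange(600)
exog = np.sin(2 * np.pi * t / 50)                            # exogenous driver
series = np.cumsum(rng.normal(scale=0.2, size=600)) + exog   # non-stationary series

lags = 5
X = np.column_stack(
    [series[i:len(series) - lags + i] for i in range(lags)] + [exog[lags:]]
)
y = series[lags:]

split = 450
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
naive = X[split:, lags - 1]                                  # last observed value
print("MLP   MSE:", mean_squared_error(y[split:], pred))
print("naive MSE:", mean_squared_error(y[split:], naive))
```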

  15. Multiview fusion for activity recognition using deep neural networks

    Science.gov (United States)

    Kavi, Rahul; Kulathumani, Vinod; Rohit, Fnu; Kecojevic, Vlad

    2016-07-01

    Convolutional neural networks (ConvNets) coupled with long short term memory (LSTM) networks have been recently shown to be effective for video classification as they combine the automatic feature extraction capabilities of a neural network with additional memory in the temporal domain. This paper shows how multiview fusion can be applied to such a ConvNet LSTM architecture. Two different fusion techniques are presented. The system is first evaluated in the context of a driver activity recognition system using data collected in a multicamera driving simulator. These results show significant improvement in accuracy with multiview fusion and also show that deep learning performs better than a traditional approach using spatiotemporal features even without requiring any background subtraction. The system is also validated on another publicly available multiview action recognition dataset that has 12 action classes and 8 camera views.

  16. Processing directed acyclic graphs with recursive neural networks.

    Science.gov (United States)

    Bianchini, M; Gori, M; Scarselli, F

    2001-01-01

    Recursive neural networks are conceived for processing graphs and extend the well-known recurrent model for processing sequences. In Frasconi et al. (1998), recursive neural networks can deal only with directed ordered acyclic graphs (DOAGs), in which the children of any given node are ordered. While this assumption is reasonable in some applications, it introduces unnecessary constraints in others. In this paper, it is shown that the constraint on the ordering can be relaxed by using an appropriate weight sharing that guarantees the independence of the network output with respect to permutations of the arcs leaving each node. The method can be used with graphs having low connectivity and, in particular, few outgoing arcs. Some theoretical properties of the proposed architecture are given. They guarantee that the approximation capabilities are maintained, despite the weight sharing.
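
    The weight-sharing idea can be illustrated with a tiny recursion: a node's state depends on its own label and on the sum of its children's states, so permuting the outgoing arcs leaves the output unchanged. The sketch below uses random weights on a four-node DAG purely for illustration and is not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(10)
dim_label, dim_state = 3, 4
W = rng.normal(scale=0.5, size=(dim_state, dim_label))   # label -> state
U = rng.normal(scale=0.5, size=(dim_state, dim_state))   # summed children -> state

# a small DAG: node -> list of children
children = {0: [1, 2], 1: [3], 2: [3], 3: []}
labels = {node: rng.normal(size=dim_label) for node in children}

memo = {}
def state(node):
    """Node state from its label and the (order-independent) sum of child states."""
    if node not in memo:
        kids = sum((state(c) for c in children[node]), np.zeros(dim_state))
        memo[node] = np.tanh(W @ labels[node] + U @ kids)
    return memo[node]

root_repr = state(0)          # representation of the whole DAG at its supersource
print(root_repr)
```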

  17. Nonlinear System Control Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Jaroslava Žilková

    2006-10-01

    The paper is focused especially on presenting possibilities of applying off-line trained artificial neural networks to creating the system inverse models that are used in designing control algorithms for non-linear dynamic systems. The ability of cascade feedforward neural networks to model arbitrary non-linear functions and their inverses is exploited. This paper presents a quasi-inverse neural model, which works as a speed controller of an induction motor. The neural speed controller consists of two cascade feedforward neural network subsystems. The first subsystem provides desired stator current components for the control algorithm and the second subsystem provides corresponding voltage components for the PWM converter. The availability of the proposed controller is verified through MATLAB simulation. The effectiveness of the controller is demonstrated for different operating conditions of the drive system.

  18. 1991 IEEE International Joint Conference on Neural Networks, Singapore, Nov. 18-21, 1991, Proceedings. Vols. 1-3

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The present conference covers the application of neural networks to associative memories, neurorecognition, hybrid systems, supervised and unsupervised learning, image processing, neurophysiology, sensation and perception, electrical neurocomputers, optimization, robotics, machine vision, sensorimotor control systems, and neurodynamics. Attention is given to such topics as optimal associative mappings in recurrent networks, self-improving associative neural network models, fuzzy activation functions, adaptive pattern recognition with sparse associative networks, efficient question-answering in a hybrid system, the use of abstractions by neural networks, remote-sensing pattern classification, speech recognition with guided propagation, inverse-step competitive learning, and rotational quadratic function neural networks. Also discussed are electrical load forecasting, evolutionarily stable and unstable strategies, the capacity of recurrent networks, neural nets vs. control theory, perceptrons for image recognition, storage capacity of bidirectional associative memories, associative random optimization for control, automatic synthesis of digital neural architectures, self-learning robot vision, and the associative dynamics of chaotic neural networks.

  19. Recognition of Telugu characters using neural networks.

    Science.gov (United States)

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
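
    As background for the MNNAM scheme, the sketch below shows the underlying Hopfield associative memory: Hebbian outer-product storage followed by asynchronous recall that restores a corrupted pattern. The stored patterns are random bit vectors standing in for character bitmaps, and the multiple-network combination of the paper is not reproduced.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage; states are +/-1 vectors."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, sweeps=5):
    """Asynchronous updates until (in practice) a stored pattern is reached."""
    x = x.copy()
    order = np.random.default_rng(0)
    for _ in range(sweeps):
        for i in order.permutation(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

rng = np.random.default_rng(11)
patterns = np.sign(rng.normal(size=(3, 64)))     # three stored "characters"
noisy = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
noisy[flip] *= -1                                 # corrupt 10 of 64 pixels

restored = recall(store(patterns), noisy)
print("matches stored pattern:", np.array_equal(restored, patterns[0]))
```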

  20. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  1. Neural Networks for Dynamic Flight Control

    Science.gov (United States)

    1993-12-01

    uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The...primary difference in the Nguyen application is that the Adaline uses the nonlinear function f(a) = tanh(a) where standard backprop uses the sigmoid

  2. Neural networks convergence using physicochemical data.

    Science.gov (United States)

    Karelson, Mati; Dobchev, Dimitar A; Kulshyn, Oleksandr V; Katritzky, Alan R

    2006-01-01

    An investigation of the neural network convergence and prediction based on three optimization algorithms, namely, Levenberg-Marquardt, conjugate gradient, and delta rule, is described. Several simulated neural networks built using the above three algorithms indicated that the Levenberg-Marquardt optimizer implemented as a back-propagation neural network converged faster than the other two algorithms and provided better prediction in most cases. These conclusions are based on eight physicochemical data sets, each with a significant number of compounds comparable to that usually used in QSAR/QSPR modeling. The superiority of the Levenberg-Marquardt algorithm is revealed in terms of the functional dependence of the change of the neural network weights with respect to the gradient of the error propagation, as well as the distribution of the weight values. The prediction of the models is assessed by the error on validation sets not used in the training process.

  3. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    methods. That is why it is becoming popular in various fields including coastal engineering. Waves and tides play important roles in coastal erosion and accretion. This paper briefly describes back-propagation neural networks and their application...

  4. Neural Network Based 3D Surface Reconstruction

    Directory of Open Access Journals (Sweden)

    Vincy Joseph

    2009-11-01

    Full Text Available This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  5. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  6. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact, that an existing, traditional designed, feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  7. TIME SERIES FORECASTING USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2013-05-01

    Full Text Available Recent studies have shown the classification and prediction power of Neural Networks. It has been demonstrated that a NN can approximate any continuous function. Neural networks have been successfully used for forecasting financial data series. The classical methods used for time series prediction, like Box-Jenkins or ARIMA, assume that there is a linear relationship between inputs and outputs. Neural Networks have the advantage that they can approximate nonlinear functions. In this paper we compared the performance of different feedforward and recurrent neural networks and training algorithms for predicting the EUR/RON and USD/RON exchange rates. We used data series with daily exchange rates starting from 2005 until 2013.
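
    As an illustration of the windowed set-up such exchange-rate experiments typically use, the sketch below turns a daily rate series into (lagged inputs, next value) pairs and fits a small feedforward regressor. The window length, network size, synthetic series and the scikit-learn MLPRegressor are illustrative assumptions, not the configurations compared in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor   # illustrative choice of trainer

def make_windows(series, lags=5):
    """Build (t-lags .. t-1) -> t input/target pairs from a 1-D series."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    return X, y

# Synthetic stand-in for a daily EUR/RON series
rng = np.random.default_rng(0)
rate = 4.4 + 0.05 * np.sin(np.arange(1000) / 30) + 0.01 * rng.standard_normal(1000)

X, y = make_windows(rate, lags=5)
split = int(0.8 * len(X))                       # train on the first 80% of days
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```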

  9. Artificial neural network and medicine.

    Science.gov (United States)

    Khan, Z H; Mohapatra, S K; Khodiar, P K; Ragu Kumar, S N

    1998-07-01

    The introduction of human brain functions such as perception and cognition into the computer has been made possible by the use of the Artificial Neural Network (ANN). ANNs are computer models inspired by the structure and behavior of neurons. Like the brain, an ANN can recognize patterns, manage data and, most significantly, learn. This learning ability, not seen in other computer models simulating human intelligence, constantly improves its functional accuracy as it keeps performing. Experience is as important for an ANN as it is for a human. ANNs are increasingly being used to supplement, and perhaps eventually replace, experts in medicine. However, there is still scope for improvement in some areas. The ability to classify and interpret various forms of medical data comes as a helping hand to clinical decision making in both diagnosis and treatment. Treatment planning in medicine, radiotherapy, rehabilitation, etc. is being done using ANNs. Morbidity and mortality prediction by ANNs in different medical situations can be very helpful for hospital management. ANNs have a promising future in fundamental research, medical education and surgical robotics.

  10. Neural network for image segmentation

    Science.gov (United States)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the PCNN application to the processing of images of heterogeneous materials; specifically, PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate PCNN sensitivity to the setting of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNN more automatic in our application and also results in better segmentation.

  11. Pattern Recognition Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Santaji Ghorpade

    2010-12-01

    Full Text Available Face recognition has been identified as one of the most attractive research areas and it has drawn the attention of many researchers due to its varying applications such as security systems, medical systems, entertainment, etc. Face recognition is the preferred mode of identification by humans: it is natural, robust and non-intrusive. A wide variety of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user and no one else. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. In the absence of robust personal recognition schemes, these systems are vulnerable to the wiles of an impostor. In this paper we have developed and illustrated a recognition system for human faces using a retrieval system based on a novel Kohonen self-organizing map (SOM), or Self-Organizing Feature Map (SOFM). The SOM has good feature-extracting properties due to its topological ordering. The facial analytics results for the 400 images of the AT&T database show that the face recognition rate using the SOM neural network algorithm is 85.5% for 40 persons.
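
    As an illustration of the Kohonen SOM training referred to above, the sketch below implements the basic best-matching-unit update with a shrinking neighbourhood on random feature vectors. The grid size, learning-rate schedule and stand-in data are assumptions; the AT&T face images and the retrieval stage are not reproduced.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Plain Kohonen SOM: find the best-matching unit, pull neighbours towards the sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    ys, xs = np.mgrid[0:h, 0:w]
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.5  # shrinking neighbourhood radius
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            dist2 = (ys - by) ** 2 + (xs - bx) ** 2
            neigh = np.exp(-dist2 / (2 * sigma ** 2))          # topological neighbourhood
            weights += lr * neigh[..., None] * (x - weights)
            step += 1
    return weights

features = np.random.default_rng(1).random((200, 30))   # stand-in for face feature vectors
som = train_som(features)
print(som.shape)   # (8, 8, 30) map of prototype vectors
```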

  12. Applications of Pulse-Coupled Neural Networks

    CERN Document Server

    Ma, Yide; Wang, Zhaobin

    2011-01-01

    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  13. Neural network models of protein domain evolution

    OpenAIRE

    Sylvia Nagl

    2000-01-01

    Protein domains are complex adaptive systems, and here a novel procedure is presented that models the evolution of new functional sites within stable domain folds using neural networks. Neural networks, which were originally developed in cognitive science for the modeling of brain functions, can provide a fruitful methodology for the study of complex systems in general. Ethical implications of developing complex systems models of biomolecules are discussed, with particular reference to molecu...

  14. Architectures of fiber optic network in telecommunications

    Science.gov (United States)

    Vasile, Irina B.; Vasile, Alexandru; Filip, Luminita E.

    2005-08-01

    The operators of telecommunications have targeted their efforts towards realizing applications using broadband fiber optic systems in the access network. Thus, a new concept related to the implementation of fiber optic transmission systems, named FITL (Fiber In The Loop), has appeared. Fiber optic transmission systems have been extensively used for realizing the transport and interconnection of the public telecommunication network, as well as for assuring access to the telecommunication systems of large corporations. Still, the segment of residential users and small corporations did not benefit from this technology on a large scale. For the purpose of defining fiber optic applications, several types of architectures were conceived: bus, ring, star and tree. Tree-like networks use passive splitters (hence the name PON, Passive Optical Network), which significantly reduce the costs of fiber optic access by separating the costs of the optical electronic components. That is why passive fiber optic architectures (PON) represent a viable solution for realizing access at the user's loop. The main types of fiber optic architectures covered in this work are: FTTC (Fiber To The Curb), FTTB (Fiber To The Building) and FTTH (Fiber To The Home).

  15. An Analysis of the Performance of Artificial Neural Network Technique for Stock Market Forecasting

    Directory of Open Access Journals (Sweden)

    Dr. Ashutosh Kumar Bhatt

    2010-09-01

    Full Text Available In this paper, we show a method to forecast the daily stock price using neural networks, and the result of the neural network forecast is compared with the statistical forecasting result. Stock price prediction is one of the emerging fields in the neural network forecasting area. This paper also presents the ability of neural networks to forecast daily stock market prices. Stock market prediction is very difficult since it depends on several known and unknown factors, while the artificial neural network is a popular technique for stock market forecasting. The neural network is based on the concept of ‘Learn by Example’. In this paper, neural networks and statistical techniques are employed to model and forecast the daily stock market prices, and then the results of these two models are compared. The forecasting ability of the two models is assessed using MAPE, MSE and RMSE. The results show that neural networks, when trained with sufficient data, proper inputs and a proper architecture, can predict stock market prices very well. Statistical techniques, though well established, see their forecasting ability reduced as the series becomes complex. Therefore, neural networks can be used as a better alternative technique for forecasting daily stock market prices.
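
    The comparison above relies on three standard error measures; as a quick reference, the sketch below computes MAPE, MSE and RMSE for a forecast against actual values. The function names and the tiny sample series are illustrative only.

```python
import numpy as np

def mse(actual, forecast):
    """Mean squared error."""
    return np.mean((np.asarray(actual, float) - np.asarray(forecast, float)) ** 2)

def rmse(actual, forecast):
    """Root mean squared error."""
    return np.sqrt(mse(actual, forecast))

def mape(actual, forecast):
    """Mean absolute percentage error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual   = [101.2, 102.5, 100.8, 103.1]   # hypothetical daily prices
forecast = [100.9, 102.9, 101.5, 102.6]
print(mape(actual, forecast), mse(actual, forecast), rmse(actual, forecast))
```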

  16. Neural network segmentation of magnetic resonance images

    Science.gov (United States)

    Frederick, Blaise

    1990-07-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained they can generalize their classification rules to new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. A neural network classifier for image segmentation was implemented on a Sun 4/60 and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier. The classifier's performance was evaluated as a function of network size, number of network layers and length of training. A single layer neural network performed quite well at

  17. A fuzzy neural network for intelligent data processing

    Science.gov (United States)

    Xie, Wei; Chu, Feng; Wang, Lipo; Lim, Eng Thiam

    2005-03-01

    In this paper, we describe an incrementally generated fuzzy neural network (FNN) for intelligent data processing. This FNN combines the features of initial fuzzy model self-generation, fast input selection, partition validation, parameter optimization and rule-base simplification. A small FNN is created from scratch -- there is no need to specify the initial network architecture, initial membership functions, or initial weights. Fuzzy IF-THEN rules are constantly combined and pruned to minimize the size of the network while maintaining accuracy; irrelevant inputs are detected and deleted, and membership functions and network weights are trained with a gradient descent algorithm, i.e., error backpropagation. Experimental studies on synthesized data sets demonstrate that the proposed fuzzy neural network is able to achieve accuracy comparable to or higher than both a feedforward crisp neural network, i.e., NeuroRule, and a decision tree, i.e., C4.5, with more compact rule bases for most of the data sets used in our experiments. The FNN has achieved outstanding results for cancer classification based on microarray data. The excellent classification result for the Small Round Blue Cell Tumors (SRBCTs) data set is shown. Compared with other published methods, we have used far fewer genes for perfect classification, which will help researchers focus their attention directly on specific genes and may lead to the discovery of the underlying causes of cancer development and of new drugs.

  18. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  19. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible for the proposed learning method to exploit the fast convergence of the logarithmic cost. Due to the fast convergence ability of the logarithmic cost function, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered as an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
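
    The abstract does not spell out the exact logarithmic cost, so the sketch below contrasts the usual squared-error cost with one common logarithmic (cross-entropy-style) cost and their gradients, purely to illustrate why a log cost can drive faster convergence; the paper's actual cost function may differ.

```python
import numpy as np

def squared_error(y, t):
    """Standard squared-error cost and its gradient w.r.t. the output y."""
    return 0.5 * (y - t) ** 2, (y - t)

def log_cost(y, t, eps=1e-12):
    """A cross-entropy-style logarithmic cost (illustrative form, not the paper's exact one).

    For outputs y in (0, 1) its gradient (y - t) / (y (1 - y)) grows much faster than
    the squared-error gradient when y is far from the target, which is the intuition
    behind the faster convergence reported above."""
    y = np.clip(y, eps, 1 - eps)
    cost = -(t * np.log(y) + (1 - t) * np.log(1 - y))
    grad = (y - t) / (y * (1 - y))
    return cost, grad

for y in (0.1, 0.4, 0.9):
    print(y, squared_error(y, 1.0)[1], log_cost(y, 1.0)[1])
```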

  20. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Chiong, Hong Sheng; Sime, Mary Jane; Wilson, Graham A

    2017-09-07

    Importance: There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Design: Retrospective audit. Samples: Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curves illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Main Outcome Measures: Area under the receiver operating characteristic curve, sensitivity and specificity. Results: For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% CI, 0.807-0.995) with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% CI, 0.973-0.986) with 96.0% sensitivity and 90.0% specificity for Messidor. Conclusions and Relevance: This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. This article is protected by copyright. All rights reserved.

  1. Architecture and Algorithms for an Airborne Network

    CERN Document Server

    Sen, Arunabha; Silva, Tiffany; Das, Nibedita; Kundu, Anjan

    2010-01-01

    The U.S. Air Force is currently in the process of developing an Airborne Network (AN) to provide support to its combat aircraft on a mission. The reliability needed for continuous operation of an AN is difficult to achieve through completely infrastructure-less mobile ad hoc networks. In this paper we first propose an architecture for an AN where airborne networking platforms (ANPs - aircraft, UAVs and satellites) form the backbone of the AN. In this architecture, the ANPs can be viewed as mobile base stations and the combat aircraft on a mission as mobile clients. The combat aircraft on a mission move through a space called the air corridor. The goal of the AN design is to form a backbone network with the ANPs with two properties: (i) the backbone network remains connected at all times, even though the topology of the network changes with the movement of the ANPs, and (ii) the entire 3D space of the air corridor is under radio coverage at all times by the continuously moving ANPs. In addition to proposing an...

  2. Hopfield neural network based on ant system

    Institute of Scientific and Technical Information of China (English)

    洪炳镕; 金飞虎; 郭琦

    2004-01-01

    The Hopfield neural network is a single-layer recurrent neural network. The Hopfield network requires some control parameters to be carefully selected, else the network is apt to converge to a local minimum. An ant system is a nature-inspired metaheuristic algorithm. It has been applied to several combinatorial optimization problems such as the Traveling Salesman Problem, scheduling problems, etc. This paper shows that an ant system may be used to tune the network control parameters by a group of cooperating ants. The major advantage of this network is that it adjusts the network parameters automatically, avoiding a blind search for the set of control parameters. This network was tested on two TSP problems, with 5 cities and 10 cities. The results have shown an obvious improvement.

  3. Iterative free-energy optimization for recurrent neural networks (INFERNO).

    Science.gov (United States)

    Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias

    2017-01-01

    The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector for moving the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory that initiates flexible goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.

  4. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    Science.gov (United States)

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of higher order and feedforward neural networks used as benchmark techniques. PMID:25157950

  5. Iterative free-energy optimization for recurrent neural networks (INFERNO)

    Science.gov (United States)

    2017-01-01

    The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector for moving the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory that initiates flexible goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle. PMID:28282439

  6. Defining a neural network controller structure for a rubbertuator robot.

    Science.gov (United States)

    Ozkan, M; Inoue, K; Negishi, K; Yamanaka, T

    2000-01-01

    The Rubbertuator (Rubber-Actuator) robot arm is a pneumatic robot, unique in its lightweight, high-power, compliant and spark-free nature. Compressibility of air in the actuator tubes and the elastic nature of the rubber, however, are the two major sources of increased non-linearity and complexity in motion control. Soft computing, exploiting the tolerance for uncertainty and vagueness in cognitive reasoning, has been offering easy-to-handle, robust, and low-priced solutions to several non-linear industrial applications. Nonetheless, the black-box approach in these systems results in application-specific architectures with some important design parameters left for fine tuning (e.g. the number of nodes in a neural network). In this study we propose a more systematic method for defining the structure of a soft computing technique, namely the backpropagation neural network, when used as a controller for rubbertuator robot systems. The structure of the neural network is based on the physical model of the robot, while the neural network itself is trained to learn the trajectory-independent parameters of the model that are essential for defining the robot dynamics. The proposed system's performance was compared with a well-tuned PID controller and shown to be more accurate in trajectory control for rubbertuator robots.

  7. Predicting physical time series using dynamic ridge polynomial neural networks.

    Directory of Open Access Journals (Sweden)

    Dhiya Al-Jumeily

    Full Text Available Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of higher order and feedforward neural networks used as benchmark techniques.

  8. Classification of handwritten digits using a RAM neural net architecture

    DEFF Research Database (Denmark)

    Jørgensen, T.M.

    1997-01-01

    Results are reported on the task of recognizing handwritten digits without any advanced pre-processing. The results are obtained using a RAM-based neural network, making use of small receptive fields. Furthermore, a technique that introduces negative weights into the RAM net is reported. The results...

  9. Artificial Neural Network System for Thyroid Diagnosis

    Directory of Open Access Journals (Sweden)

    Mazin Abdulrasool Hameed

    2017-05-01

    Full Text Available Thyroid disease is one of the major causes of severe medical problems for human beings. Therefore, proper diagnosis of thyroid disease is considered an important issue in determining treatment for patients. This paper focuses on using an Artificial Neural Network (ANN) as a significant artificial intelligence technique to diagnose thyroid diseases. The continuous values of three laboratory blood tests are used as input signals to the proposed ANN system. All types of thyroid disease that may occur in patients are taken into account in the design of the system, and high accuracy of detection and categorization of thyroid diseases is also considered. A multilayer feedforward ANN architecture is adopted in the proposed design, and back propagation is selected as the learning algorithm to accomplish the training process. The results of this research show that the proposed ANN system is able to precisely diagnose thyroid disease and can be exploited in practical applications. The system is simulated in MATLAB to evaluate its performance.

  10. Neural network implementation using bit streams.

    Science.gov (United States)

    Patel, Nitish D; Nguang, Sing Kiong; Coghill, George G

    2007-09-01

    A new method for the parallel hardware implementation of artificial neural networks (ANNs) using digital techniques is presented. Signals are represented using uniformly weighted single-bit streams. Techniques for generating bit streams from analog or multibit inputs are also presented. This single-bit representation offers significant advantages over multibit representations since it mitigates the fan-in and fan-out issues which are typical of distributed systems. To process these bit streams using ANN concepts, functional elements which perform summing, scaling, and squashing have been implemented. These elements are modular and have been designed such that they can be easily interconnected. Two new architectures which act as monotonically increasing differentiable nonlinear squashing functions have also been presented. Using these functional elements, a multilayer perceptron (MLP) can be easily constructed. Two examples successfully demonstrate the use of bit streams in the implementation of ANNs. Since every functional element is individually instantiated, the implementation is genuinely parallel. The results clearly show that this bit-stream technique is viable for the hardware implementation of a variety of distributed systems and for ANNs in particular.
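
    The bit-stream representation above is close to classical stochastic computing, where a value in [0, 1] is encoded as the probability of a 1 in a uniformly weighted bit stream; the sketch below shows that encoding and the fact that a bitwise AND of two independent streams approximates the product of the encoded values. It is a generic illustration of the representation, not the authors' summing, scaling and squashing hardware elements.

```python
import numpy as np

def to_bitstream(value, length=4096, rng=None):
    """Encode a value in [0, 1] as a stream whose fraction of 1s equals the value."""
    rng = rng or np.random.default_rng()
    return (rng.random(length) < value).astype(np.uint8)

def decode(stream):
    """Recover the encoded value as the mean of the stream."""
    return stream.mean()

rng = np.random.default_rng(0)
a, b = 0.8, 0.3
sa, sb = to_bitstream(a, rng=rng), to_bitstream(b, rng=rng)

product_stream = sa & sb          # bitwise AND of independent streams multiplies the values
print(decode(product_stream))     # approximately 0.24 = 0.8 * 0.3
```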

  11. A Network Software Architecture Suitable for Service Customization

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper explores service customization from the viewpoint of network software architectures. The authors first abstract a network system into a framework that consists of several layered basic systems and then propose a component-based network software architecture for one basic system of network software, which is suitable for service customization. The network software architecture is formalized with the theory of Communicating Sequential Processes, and its possible applications in the areas of personal service environments and service customization are shown.

  12. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...... (HNNs) with much fewer parameters than conventional HMMs and other hybrids can obtain comparable performance, and for the broad class task it is illustrated how the HNN can be applied as a purely transition based system, where acoustic context dependent transition probabilities are estimated by neural...

  13. Matrix representation of a Neural Network

    DEFF Research Database (Denmark)

    Christensen, Bjørn Klint

    This paper describes the implementation of a three-layer feedforward backpropagation neural network. The paper does not explain feedforward, backpropagation or what a neural network is. It is assumed, that the reader knows all this. If not please read chapters 2, 8 and 9 in Parallel Distributed...... Processing, by David Rummelhart (Rummelhart 1986) for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear...... network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes all indexed from 0. Bias-node for the hidden...
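
    A compact sketch of the matrix form the note describes: the weights between layers are stored as matrices, the forward pass is two matrix-vector products with a sigmoid non-linearity, and backpropagation becomes a pair of outer products. The layer sizes and the omission of explicit bias nodes below are illustrative simplifications; the note's exact indexing and bias-node convention may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
I, J, K = 4, 6, 3                          # input, hidden, output sizes
W1 = rng.normal(scale=0.5, size=(J, I))    # input -> hidden weight matrix
W2 = rng.normal(scale=0.5, size=(K, J))    # hidden -> output weight matrix

def forward(x):
    h = sigmoid(W1 @ x)                    # hidden activations
    y = sigmoid(W2 @ h)                    # output activations
    return h, y

def backprop_step(x, target, lr=0.5):
    """One gradient step: error terms per layer, weight updates as outer products."""
    global W1, W2
    h, y = forward(x)
    delta_out = (y - target) * y * (1 - y)          # output-layer error terms
    delta_hid = (W2.T @ delta_out) * h * (1 - h)    # backpropagated hidden error
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)

x, t = rng.random(I), np.array([1.0, 0.0, 0.0])
for _ in range(200):
    backprop_step(x, t)
print(forward(x)[1])    # output moves towards the target
```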

  14. Application of Partially Connected Neural Network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses mainly on application of the Partially Connected Backpropagation Neural Network (PCBP) instead of the typical Fully Connected Neural Network (FCBP). The initial neural network is fully connected; after training with sample data using cross-entropy as the error function, a clustering method is employed to cluster the weights from the inputs to the hidden layer and from the hidden to the output layer, and connections that are relatively unnecessary are deleted, so that the initial network becomes a PCBP network. PCBP can then be used in prediction or data mining by training it with data that comes from a database. At the end of this paper, several experiments are conducted to illustrate the effects of PCBP using the Iris data set.

  15. Training Data Requirement for a Neural Network to Predict Aerodynamic Coefficients

    Science.gov (United States)

    Korsmeyer, David (Technical Monitor); Rajkumar, T.; Bardina, Jorge

    2003-01-01

    Basic aerodynamic coefficients are modeled as functions of angle of attack, speed brake deflection angle, Mach number, and sideslip angle. Most of the aerodynamic parameters can be well fitted using polynomial functions. We previously demonstrated that a neural network is a fast, reliable way of predicting aerodynamic coefficients. We encountered a few underfitted and/or overfitted results during prediction. The training data for the neural network are derived from wind tunnel test measurements and numerical simulations. The basic questions that arise are: how many training data points are required to produce an efficient neural network prediction, and which type of transfer function should be used between the input-hidden layer and the hidden-output layer. In this paper, a comparative study of the efficiency of neural network prediction based on different transfer functions and training dataset sizes is presented. The results of the neural network prediction reflect the sensitivity of the architecture, transfer functions, and training dataset size.

  16. Overlay Multicast Networks : Elements, Architectures and Performance

    OpenAIRE

    Constantinescu, Doru

    2007-01-01

    Today, the telecommunication industry is undergoing two important developments with implications on future architectural solutions. These are the irreversible move towards Internet Protocol (IP)-based networking and the deployment of broadband access. Taken together, these developments offer the opportunity for more advanced and more bandwidth-demanding multimedia applications and services, e. g., IP television (IPTV), Voice over IP (VoIP) and online gaming. A plethora of Quality of Service (...

  17. On neural networks that design neural associative memories.

    Science.gov (United States)

    Chan, H Y; Zak, S H

    1997-01-01

    The design problem of generalized brain-state-in-a-box (GBSB) type associative memories is formulated as a constrained optimization program, and "designer" neural networks for solving the program in real time are proposed. The stability of the designer networks is analyzed using Barbalat's lemma. The analyzed and synthesized neural associative memories do not require symmetric weight matrices. Two types of the GBSB-based associative memories are analyzed, one when the network trajectories are constrained to reside in the hypercube [-1, 1]^n and the other type when the network trajectories are confined to stay in the hypercube [0, 1]^n. Numerical examples and simulations are presented to illustrate the results obtained.

  18. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.

  19. Measuring human emotions with modular neural networks and computer vision based applications

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2015-05-01

    Full Text Available This paper describes a neural network architecture for emotion recognition for human-computer interfaces and applied systems. In the current research, we propose a combination of the most recent biometric techniques with the neural network (NN) approach for real-time emotion and behavioral analysis. The system will be tested in real-time applications of customers' behavior for distributed on-land systems, such as kiosks and ATMs.

  20. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    Science.gov (United States)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of a convolutional neural network. An integrated fuzzy logic module based on a structural approach was developed. The system architecture used adjusts the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm is flexible, and a high recognition rate of 99.23% was achieved.

  1. Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural Networks

    OpenAIRE

    Lu, Yuzhen; Salem, Fathi M.

    2017-01-01

    The standard LSTM recurrent neural networks, while very powerful in long-range dependency sequence applications, have a highly complex structure and a relatively large number of (adaptive) parameters. In this work, we present an empirical comparison between the standard LSTM recurrent neural network architecture and three new parameter-reduced variants obtained by eliminating combinations of the input signal, bias, and hidden unit signals from the individual gating signals. The experiments on two sequence datasets ...
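
    The sketch below writes out the standard LSTM gate equations and one example of the kind of parameter-reduced variant the paper studies, here dropping the input signal x from the three gates so that gating depends only on the previous hidden state and a bias. The exact variants and naming in the paper may differ; this is only meant to show where parameters are removed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, p, simplified=False):
    """One LSTM step. If simplified, the input x is dropped from the gate signals
    (one kind of reduction explored in the paper), leaving gates driven by h and a bias."""
    def gate(name):
        z = p["U_" + name] @ h + p["b_" + name]
        if not simplified:
            z = z + p["W_" + name] @ x
        return sigmoid(z)
    i, f, o = gate("i"), gate("f"), gate("o")
    c_tilde = np.tanh(p["W_c"] @ x + p["U_c"] @ h + p["b_c"])   # candidate cell state
    c_new = f * c + i * c_tilde
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
p = {}
for g in ("i", "f", "o", "c"):
    p["W_" + g] = rng.normal(size=(n_hid, n_in))
    p["U_" + g] = rng.normal(size=(n_hid, n_hid))
    p["b_" + g] = np.zeros(n_hid)

x, h, c = rng.random(n_in), np.zeros(n_hid), np.zeros(n_hid)
print(lstm_step(x, h, c, p)[0])                    # standard gating
print(lstm_step(x, h, c, p, simplified=True)[0])   # input signal removed from gates
```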

  2. Comparative analysis of Recurrent and Finite Impulse Response Neural Networks in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Milos Miljanovic

    2012-02-01

    Full Text Available The purpose of this paper is to evaluate two different neural network architectures used for solving temporal problems, i.e. time series prediction. The data sets in this project include Mackey-Glass, Sunspots, and the Standard & Poor's 500 stock market index. The study also presents a comparison of the two networks and their performance.

  3. Pattern Classification using Simplified Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. This paper presents an approach for classifying patterns using simplified NNs. Although the predictive accuracy of ANNs is often higher than that of other methods or human experts, it is often said that ANNs are practically "black boxes", due to the complexity of the networks. In this paper, we have attempted to open up these black boxes by reducing the complexity of the network. The factor that makes this possible is the pruning algorithm. By eliminating redundant weights, redundant input and hidden units are identified and removed from the network. Using the pruning algorithm, we have been able to prune networks such that only a few input units, hidden units and connections are left, yielding a simplified network. Experimental results on several benchmark problems in neural networks show the effectiveness of the proposed approach with good generalization ability.
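
    The abstract does not give the pruning rule in detail, so the sketch below uses simple magnitude-based pruning as a stand-in: weights whose absolute value falls below a threshold are zeroed, and hidden units left with no useful incoming or outgoing connections are removed, shrinking the network.

```python
import numpy as np

def prune(W1, W2, threshold=0.1):
    """Zero small weights, then drop hidden units left with no useful connections.
    Magnitude thresholding is an illustrative stand-in for the paper's pruning algorithm."""
    W1 = np.where(np.abs(W1) < threshold, 0.0, W1)   # input -> hidden weights
    W2 = np.where(np.abs(W2) < threshold, 0.0, W2)   # hidden -> output weights
    alive = (np.abs(W1).sum(axis=1) > 0) & (np.abs(W2).sum(axis=0) > 0)
    return W1[alive, :], W2[:, alive]

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.2, size=(12, 4))   # 4 inputs, 12 hidden units
W2 = rng.normal(scale=0.2, size=(3, 12))   # 3 outputs
pW1, pW2 = prune(W1, W2, threshold=0.15)
print(W1.shape, "->", pW1.shape)           # fewer hidden units after pruning
```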

  4. Securing Wireless Sensor Networks: Security Architectures

    Directory of Open Access Journals (Sweden)

    David Boyle

    2008-01-01

    Full Text Available Wireless sensor networking remains one of the most exciting and challenging research domains of our time. As technology progresses, so do the capabilities of sensor networks. Limited only by what can be technologically sensed, it is envisaged that wireless sensor networks will play an important part in our daily lives in the foreseeable future. Privy to many types of sensitive information, both sensed and disseminated, there is a critical need for security in a number of applications related to this technology. Resulting from the continuous debate over the most effective means of securing wireless sensor networks, this paper considers a number of the security architectures employed, and proposed, to date, with this goal in sight. They are presented such that the various characteristics of each protocol are easily identifiable to potential network designers, allowing a more informed decision to be made when implementing a security protocol for their intended application. Authentication is the primary focus, as the most malicious attacks on a network are the work of imposters, such as DOS attacks, packet insertion etc. Authentication can be defined as a security mechanism, whereby, the identity of a node in the network can be identified as a valid node of the network. Subsequently, data authenticity can be achieved; once the integrity of the message sender/receiver has been established.

  5. Artificial Neural Network Approach in Radar Target Classification

    Directory of Open Access Journals (Sweden)

    N. K. Ibrahim

    2009-01-01

    Full Text Available Problem statement: This study unveils the potential and utilization of Neural Networks (NN) in radar applications for target classification. The radar system under test is special of its kind and is known as Forward Scattering Radar (FSR). In this study the target is a ground vehicle represented by typical public road transport. The features from the raw radar signal were extracted manually prior to the classification process using a Neural Network (NN). Features given to the proposed network model are identified through radar theoretical analysis. A Multi-Layer Perceptron (MLP) back-propagation neural network trained with three back-propagation algorithms was implemented and analyzed. In the NN classifier, the unknown target is sent to the network trained by the known targets to attain the accurate output. Approach: Two types of classification were analyzed. The first is to classify the exact type of vehicle; four vehicle types were selected. The second objective is to group vehicles into their categories. The proposed NN architecture is compared to the K Nearest Neighbor (KNN) classifier and the performance is evaluated. Results: Based on the results, the proposed NN provides a higher percentage of successful classification than the KNN classifier. Conclusion/Recommendation: The results presented here show that NN can be effectively employed in radar classification applications.

  6. Convolutional Neural Networks Applied to House Numbers Digit Classification

    CERN Document Server

    Sermanet, Pierre; LeCun, Yann

    2012-01-01

    We classify digits of real-world house numbers using convolutional neural networks (ConvNets). ConvNets are hierarchical feature learning neural networks whose structure is biologically inspired. Unlike many popular vision approaches that are hand-designed, ConvNets can automatically learn a unique set of features optimized for a given task. We augmented the traditional ConvNet architecture by learning multi-stage features and by using Lp pooling and establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2% error improvement). Furthermore, we analyze the benefits of different pooling methods and multi-stage features in ConvNets. The source code and a tutorial are available at eblearn.sf.net.
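
    Lp pooling, mentioned above, generalizes average and max pooling by computing (mean of |x|^p)^(1/p) over each pooling window; p = 1 gives average pooling and large p approaches max pooling. The sketch below applies one common unweighted formulation to non-overlapping 2x2 windows; the window size and values of p are illustrative, and the paper additionally weights the window with a Gaussian kernel.

```python
import numpy as np

def lp_pool(feature_map, p=2, size=2):
    """Non-overlapping Lp pooling: (mean of |x|^p) ** (1/p) over each size x size window."""
    h, w = feature_map.shape
    h2, w2 = h // size, w // size
    blocks = feature_map[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    return (np.mean(np.abs(blocks) ** p, axis=(1, 3))) ** (1.0 / p)

fm = np.arange(16, dtype=float).reshape(4, 4)
print(lp_pool(fm, p=1))          # average pooling
print(lp_pool(fm, p=8))          # closer to max pooling
```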

  7. Measuring Customer Behavior with Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2016-03-01

    Full Text Available In this paper, we propose a neural network model for human emotion and gesture classification. We demonstrate that the proposed architecture represents an effective tool for real-time processing of customer's behavior for distributed on-land systems, such as information kiosks, automated cashiers and ATMs. The proposed approach combines most recent biometric techniques with the neural network approach for real-time emotion and behavioral analysis. In the series of experiments, emotions of human subjects were recorded, recognized, and analyzed to give statistical feedback of the overall emotions of a number of targets within a certain time frame. The result of the study allows automatic tracking of user’s behavior based on a limited set of observations.

  9. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    Science.gov (United States)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining optimal control with control and state constraints.

  10. An Artificial Neural Network Model for the Wholesale Company Order's Cycle Management

    Directory of Open Access Journals (Sweden)

    Tereza Sustrova

    2016-06-01

    Full Text Available The purpose of this article is to verify the possibility of using artificial neural networks (ANN) in business management processes, primarily in the area of supply chain management. The author has designed several neural network models featuring different architectures to optimize the level of the company's inventory. The results of the research show that ANN can be used for managing a company's order cycle and lead to reduced levels of goods purchased and storage costs. The best-performing neural networks show suitable results for subsequent prediction of the number of items to be ordered and for keeping inventory purchasing and storage costs down.

  11. Neural Network Based on Rough Sets and Its Application to Remote Sensing Image Classification

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper presents a new kind of back propagation neural network (BPNN) based on rough sets, called the rough back propagation neural network (RBPNN). The architecture and training method of the RBPNN are presented, and the survey and analysis of the RBPNN for the classification of remote sensing multi-spectral images are discussed. The successful application of the RBPNN to a land cover classification illustrates the simple computation and high accuracy of the new neural network, as well as the flexibility and practicality of this new approach.

  12. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  13. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using differ

  14. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered under a mild set of assumptions. A number of applications...

  16. Neural networks as perpetual information generators

    Science.gov (United States)

    Englisch, Harald; Xiao, Yegao; Yao, Kailun

    1991-07-01

    The information gain in a neural network cannot be larger than the bit capacity of the synapses. It is shown that the equation derived by Engel et al. [Phys. Rev. A 42, 4998 (1990)] for the strongly diluted network with persistent stimuli contradicts this condition. Furthermore, for any time step the correct equation is derived by taking the correlation between random variables into account.

  17. Prediction of daily sea surface temperature using efficient neural networks

    Science.gov (United States)

    Patil, Kalpesh; Deo, Makaranad Chintamani

    2017-04-01

    Short-term prediction of sea surface temperature (SST) is commonly achieved through numerical models. Numerical approaches are more suitable for use over a large spatial domain than at a specific site because of the difficulties involved in resolving various physical sub-processes at local levels. Therefore, for a given location, a data-driven approach such as neural networks may provide a better alternative. The application of neural networks, however, requires extensive experimentation with their architecture, training methods, and the formation of appropriate input-output pairs. A network trained in this manner can provide more attractive results if advances in network architecture are additionally considered. With this in mind, we propose the use of wavelet neural networks (WNNs) for prediction of daily SST values. The prediction of daily SST values was carried out using WNN over 5 days into the future at six different locations in the Indian Ocean. First, the accuracy of site-specific SST values predicted by a numerical model, ROMS, was assessed against the in situ records; the result pointed out the necessity for alternative approaches. Traditional networks were tried first and, after noticing their poor performance, WNN was used. This approach produced attractive forecasts when judged through various error statistics. When all locations were viewed together, the mean absolute error was within 0.18 to 0.32 °C for a 5-day-ahead forecast. The WNN approach was thus found to add value to the numerical method of SST prediction when location-specific information is desired.
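
    A minimal sketch of how such input-output pairs and the mean-absolute-error statistic can be formed (a plain feed-forward network on synthetic daily SST is used here as a stand-in; the paper itself uses wavelet neural networks on observed Indian Ocean records):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Illustrative only: synthetic daily SST with a seasonal cycle plus noise.
        rng = np.random.default_rng(1)
        days = np.arange(2000)
        sst = 28 + 1.5 * np.sin(2 * np.pi * days / 365.0) + rng.normal(0, 0.2, days.size)

        lags, lead = 7, 5  # last 7 days in, value 5 days ahead out
        X, y = [], []
        for i in range(sst.size - lags - lead + 1):
            X.append(sst[i:i + lags])
            y.append(sst[i + lags + lead - 1])
        X, y = np.array(X), np.array(y)

        split = int(0.8 * len(X))
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
        model.fit(X[:split], y[:split])

        mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
        print(f"5-day-ahead MAE: {mae:.2f} °C")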

  18. Prediction of daily sea surface temperature using efficient neural networks

    Science.gov (United States)

    Patil, Kalpesh; Deo, Makaranad Chintamani

    2017-02-01

    Short-term prediction of sea surface temperature (SST) is commonly achieved through numerical models. Numerical approaches are more suitable for use over a large spatial domain than at a specific site because of the difficulties involved in resolving various physical sub-processes at local levels. Therefore, for a given location, a data-driven approach such as neural networks may provide a better alternative. The application of neural networks, however, requires extensive experimentation with their architecture, training methods, and the formation of appropriate input-output pairs. A network trained in this manner can provide more attractive results if advances in network architecture are additionally considered. With this in mind, we propose the use of wavelet neural networks (WNNs) for prediction of daily SST values. The prediction of daily SST values was carried out using WNN over 5 days into the future at six different locations in the Indian Ocean. First, the accuracy of site-specific SST values predicted by a numerical model, ROMS, was assessed against the in situ records; the result pointed out the necessity for alternative approaches. Traditional networks were tried first and, after noticing their poor performance, WNN was used. This approach produced attractive forecasts when judged through various error statistics. When all locations were viewed together, the mean absolute error was within 0.18 to 0.32 °C for a 5-day-ahead forecast. The WNN approach was thus found to add value to the numerical method of SST prediction when location-specific information is desired.

  19. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states and thereby improve the result. Due to its unique quantum nature, this model is robust to several kinds of quantum noise under certain conditions and can be efficiently implemented on a qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results for Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in a classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  20. Neural Network Approaches to Visual Motion Perception

    Institute of Scientific and Technical Information of China (English)

    郭爱克; 杨先一

    1994-01-01

    This paper concerns certain difficult problems in image processing and perception: neuro-computation of visual motion information. The first part of this paper deals with the spatial physiological integration performed by the figure-ground discrimination neural network in the visual system of the fly. We have outlined the fundamental organization and algorithms of this neural network, and mainly concentrated on the results of computer simulations of spatial physiological integration. It has been shown that the gain control mechanism, the nonlinearity of the synaptic transmission characteristic, the interaction between the two eyes, and the directional selectivity of the pool cells play decisive roles in the spatial physiological integration. In the second part, we have presented a self-organizing neural network for the perception of visual motion by using a retinotopic array of Reichardt’s motion detectors and Kohonen’s self-organizing maps. It has been demonstrated by computer simulations that the network is able…

  1. A Combined Network Architecture Using Art2 and Back Propagation for Adaptive Estimation of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    Einar Sørheim

    1990-10-01

    Full Text Available A neural network architecture called ART2/BP is proposed. The goal has been to construct an artificial neural network that incrementally learns an unknown mapping, and the work is motivated by the instability found in back propagation (BP) networks: after first learning pattern A and then pattern B, a BP network often has completely 'forgotten' pattern A. A network using both supervised and unsupervised training is proposed, consisting of a combination of ART2 and BP. ART2 is used to build and focus a supervised backpropagation network consisting of many small subnetworks, each specialized on a particular domain of the input space. The ART2/BP network has the advantage of being able to dynamically expand itself in response to input patterns containing new information. Simulation results show that the ART2/BP network outperforms a classical maximum likelihood method for the estimation of a discrete, dynamic and nonlinear transfer function.
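
    A rough sketch of the partition-then-specialize idea (k-means is used below purely as a stand-in for ART2's unsupervised clustering, and the data and model sizes are illustrative assumptions):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        # Illustrative only: cluster the input space without supervision, then train
        # one small backpropagation sub-network per cluster.
        rng = np.random.default_rng(2)
        X = rng.uniform(-3, 3, size=(1500, 1))
        y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=1500)  # unknown mapping to learn

        clusterer = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
        subnets = {}
        for c in range(6):
            mask = clusterer.labels_ == c
            subnets[c] = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                      random_state=0).fit(X[mask], y[mask])

        def predict(x_new):
            # route each query to the sub-network specialised on its input region
            labels = clusterer.predict(x_new)
            return np.array([subnets[c].predict(x_new[i:i + 1])[0]
                             for i, c in enumerate(labels)])

        print(predict(np.array([[0.5], [-2.0]])))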

  2. Stability analysis of discrete-time BAM neural networks based on standard neural network models

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sen-lin; LIU Mei-qin

    2005-01-01

    To facilitate stability analysis of discrete-time bidirectional associative memory (BAM) neural networks, they were converted into novel neural network models, termed standard neural network models (SNNMs), which interconnect linear dynamic systems and bounded static nonlinear operators. By combining a number of different Lyapunov functionals with S-procedure, some useful criteria of global asymptotic stability and global exponential stability of the equilibrium points of SNNMs were derived. These stability conditions were formulated as linear matrix inequalities (LMIs). So global stability of the discrete-time BAM neural networks could be analyzed by using the stability results of the SNNMs. Compared to the existing stability analysis methods, the proposed approach is easy to implement, less conservative, and is applicable to other recurrent neural networks.

  3. Neural-networks-based Modelling and a Fuzzy Neural Networks Controller of MCFC

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Molten carbonate fuel cells (MCFC) offer a highly efficient and clean power generation technology that will soon be widely utilized. The temperature characteristics of the MCFC stack are briefly analyzed. A radial basis function (RBF) neural network identification technique is applied to set up a nonlinear temperature model of the MCFC stack, and the identification structure, algorithm and model training process are given in detail. A fuzzy controller for the MCFC stack is designed. To improve its online control ability, a neural network trained on the I/O data of the fuzzy controller is designed. The neural network can memorize and extend the inference rules of the fuzzy controller and substitute for it to control the MCFC stack online. A detailed design of the controller is given. The validity of MCFC stack modelling based on neural networks and the superior performance of the fuzzy neural network controller are demonstrated by simulations.

  4. Living ordered neural networks as model systems for signal processing

    Science.gov (United States)

    Villard, C.; Amblard, P. O.; Becq, G.; Gory-Fauré, S.; Brocard, J.; Roth, S.

    2007-06-01

    Neural circuit architecture is a fundamental characteristic of the brain, and how architecture is bound to biological function is still an open question. Some neuronal geometries seen in the retina or the cochlea are intriguing: information is processed in parallel by several entities, as in "pooling" networks, which have recently drawn the attention of signal processing scientists. These systems indeed exhibit the noise-enhanced processing effect, which is also actively discussed in the neuroscience community at the neuron scale. The aim of our project is to use in-vitro ordered neuron networks as living paradigms to test ideas coming from computational science. The different technological hurdles that have to be overcome are enumerated and the first results are presented. A neuron is a polarised cell, with an excitatory axon and a receiving dendritic tree. We present how soma confinement and axon differentiation can be induced by surface functionalization techniques. The recording of large neuron networks, ordered or not, is also detailed and biological signals are shown. The main difficulty in accessing neural noise in the case of weakly connected networks grown on micro-electrode arrays is explained. This opens the door to a new detection technology suitable for sub-cellular analysis and stimulation, whose development will constitute the next step of this project.

  5. Network architecture underlying maximal separation of neuronal representations

    Directory of Open Access Journals (Sweden)

    Ron A Jortner

    2013-01-01

    Full Text Available One of the most basic and general tasks faced by all nervous systems is extracting relevant information from the organism’s surrounding world. While physical signals available to sensory systems are often continuous, variable, overlapping and noisy, high-level neuronal representations used for decision-making tend to be discrete, specific, invariant, and highly separable. This study addresses the question of how neuronal specificity is generated. Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source- and target-populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. In particular, connection probability ½, as found in the locust antennal-lobe–mushroom-body circuit, serves to maximize separation of neuronal representations across the target Kenyon-cells, and explains their specific and reliable responses. This analysis yields a function expressing response specificity in terms of lower network-parameters; together with appropriate gain control this leads to a simple neuronal algorithm for generating arbitrarily sparse and selective codes and linking network architecture and neural coding. I suggest a way to easily construct ecologically meaningful representations from this code.
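
    The combinatorial point can be made concrete with a short worked equation (a sketch of the general argument, not the paper's full analysis): a target cell receiving exactly $c$ of $N$ possible inputs has $\binom{N}{c}$ distinct wiring patterns, and this count is maximised at $c = N/2$, i.e. at connection probability $\tfrac{1}{2}$. For an illustrative $N = 800$ source cells, $\binom{800}{400} \approx 10^{239}$, so the space of possible input patterns vastly exceeds the number of target cells, which is what makes sparse, well-separated representations generic.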

  6. Navigation Architecture for a Space Mobile Network

    Science.gov (United States)

    Valdez, Jennifer E.; Ashman, Benjamin; Gramling, Cheryl; Heckler, Gregory W.; Carpenter, Russell

    2016-01-01

    The Tracking and Data Relay Satellite System (TDRSS) Augmentation Service for Satellites (TASS) is a proposed beacon service to provide a global, space based GPS augmentation service based on the NASA Global Differential GPS (GDGPS) System. The TASS signal will be tied to the GPS time system and usable as an additional ranging and Doppler radiometric source. Additionally, it will provide data vital to autonomous navigation in the near Earth regime, including space weather information, TDRS ephemerides, Earth Orientation Parameters (EOP), and forward commanding capability. TASS benefits include enhancing situational awareness, enabling increased autonomy, and providing near real-time command access for user platforms. As NASA Headquarters' Space Communication and Navigation Office (SCaN) begins to move away from a centralized network architecture and towards a Space Mobile Network (SMN) that allows for user initiated services, autonomous navigation will be a key part of such a system. This paper explores how a TASS beacon service enables the Space Mobile Networking paradigm, what a typical user platform would require, and provides an in-depth analysis of several navigation scenarios and operations concepts. This paper provides an overview of the TASS beacon and its role within the SMN and user community. Supporting navigation analysis is presented for two user mission scenarios: an Earth observing spacecraft in low earth orbit (LEO), and a highly elliptical spacecraft in a lunar resonance orbit. These diverse flight scenarios indicate the breadth of applicability of the TASS beacon for upcoming users within the current network architecture and in the SMN.

  7. NATO Human View Architecture and Human Networks

    Science.gov (United States)

    Handley, Holly A. H.; Houston, Nancy P.

    2010-01-01

    The NATO Human View is a system architectural viewpoint that focuses on the human as part of a system. Its purpose is to capture the human requirements and to inform on how the human impacts the system design. The viewpoint contains seven static models that include different aspects of the human element, such as roles, tasks, constraints, training and metrics. It also includes a Human Dynamics component to perform simulations of the human system under design. One of the static models, termed Human Networks, focuses on the human-to-human communication patterns that occur as a result of ad hoc or deliberate team formation, especially teams distributed across space and time. Parameters of human teams that affect system performance can be captured in this model. Human centered aspects of networks, such as differences in operational tempo (sense of urgency), priorities (common goal), and team history (knowledge of the other team members), can be incorporated. The information captured in the Human Network static model can then be included in the Human Dynamics component so that the impact of distributed teams is represented in the simulation. As the NATO militaries transform to a more networked force, the Human View architecture is an important tool that can be used to make recommendations on the proper mix of technological innovations and human interactions.

  8. A neural architecture for nonlinear adaptive filtering of time series

    DEFF Research Database (Denmark)

    Hoffmann, Nils; Larsen, Jan

    1991-01-01

    A neural architecture for adaptive filtering which incorporates a modularization principle is proposed. It facilitates a sparse parameterization, i.e. fewer parameters have to be estimated in a supervised training procedure. The main idea is to use a preprocessor which determines the dimension...... of the input space and can be designed independently of the subsequent nonlinearity. Two suggestions for the preprocessor are presented: the derivative preprocessor and the principal component analysis. A novel implementation of fixed Volterra nonlinearities is given. It forces the boundedness...

  9. Dynamic pricing by hopfield neural network

    Institute of Scientific and Technical Information of China (English)

    Lusajo M Minga; FENG Yu-qiang(冯玉强); LI Yi-jun(李一军); LU Yang(路杨); Kimutai Kimeli

    2004-01-01

    The increase in the number of shopbot users in e-commerce has pushed sellers toward more flexible pricing strategies. Sellers see the importance of automated price setting, which provides efficient service to the large number of buyers using shopbots. This paper studies the characteristic of decreasing energy with time in a continuous Hopfield neural network model, that is, the decrease of errors in the network with respect to time. This characteristic shows that it is possible to use a Hopfield neural network to obtain the main factor of dynamic pricing, the least variable cost, from production function principles. The least variable cost is obtained by increasing or decreasing the input combination factors and then comparing the network output with the desired output, where the difference between the two decreases in the same manner as the Hopfield network energy. The Hopfield neural network thus simplifies the rapid change of prices in e-commerce during transactions that depend on demand quantity in a demand-sensitive pricing model.
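
    The property being exploited is the standard Lyapunov behaviour of the continuous Hopfield model (a generic statement, not the paper's pricing-specific formulation): with symmetric weights $w_{ij} = w_{ji}$ and monotone activation functions $V_i = g_i(u_i)$, the energy

        E = -\tfrac{1}{2}\sum_{i,j} w_{ij} V_i V_j \;-\; \sum_i I_i V_i \;+\; \sum_i \frac{1}{R_i}\int_0^{V_i} g_i^{-1}(v)\,dv

    satisfies $dE/dt \le 0$ along the network dynamics, so a cost encoded in this energy can only decrease over time, which is the monotone decrease of the output error that the abstract relies on.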

  10. Neutron spectrometry with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail: rvega@cantera.reduaz.mx

    2005-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  11. Representations in neural network based empirical potentials

    Science.gov (United States)

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios

    2017-07-01

    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.

  12. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.

  13. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    Science.gov (United States)

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

    Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused either on solely mimicking the neuron or on the synapse functionality. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit and system level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ∼100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  14. Optimality: from neural networks to universal grammar.

    Science.gov (United States)

    Prince, A; Smolensky, P

    1997-03-14

    Can concepts from the theory of neural computation contribute to formal theories of the mind? Recent research has explored the implications of one principle of neural computation, optimization, for the theory of grammar. Optimization over symbolic linguistic structures provides the core of a new grammatical architecture, optimality theory. The proposition that grammaticality equals optimality sheds light on a wide range of phenomena, from the gulf between production and comprehension in child language, to language learnability, to the fundamental questions of linguistic theory: What is it that the grammars of all languages share, and how may they differ?

  15. A framework for plasticity implementation on the SpiNNaker neural architecture

    Directory of Open Access Journals (Sweden)

    Francesco eGalluppi

    2015-01-01

    Full Text Available Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are badly matched to computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
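
    As a point of reference for the simplest of the rules mentioned above, a pair-based STDP update can be sketched as follows (a minimal sketch with illustrative parameters, not the SpiNNaker implementation):

        import numpy as np

        # Minimal pair-based STDP sketch: potentiate when a presynaptic spike precedes
        # a postsynaptic one, depress when it follows, with exponential time windows.
        A_plus, A_minus = 0.01, 0.012       # illustrative learning rates
        tau_plus, tau_minus = 20.0, 20.0    # time constants in ms

        def stdp_dw(pre_times, post_times):
            """Total weight change for one synapse given spike times (ms)."""
            dw = 0.0
            for t_post in post_times:
                for t_pre in pre_times:
                    dt = t_post - t_pre
                    if dt > 0:
                        dw += A_plus * np.exp(-dt / tau_plus)    # pre before post
                    elif dt < 0:
                        dw -= A_minus * np.exp(dt / tau_minus)   # post before pre
            return dw

        print(stdp_dw(pre_times=[10.0, 50.0], post_times=[12.0, 45.0]))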

  16. A framework for plasticity implementation on the SpiNNaker neural architecture.

    Science.gov (United States)

    Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B

    2014-01-01

    Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are badly matched to computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.

  17. Estimates on compressed neural networks regression.

    Science.gov (United States)

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the neural element number n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A, which does not need to satisfy the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of the feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.
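
    The flavour of learning in a compressed domain can be sketched as follows (an illustrative toy only: a plain Gaussian projection of the inputs and a scikit-learn regressor, in the spirit of the compression idea rather than the paper's construction or error bounds):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Illustrative only: project high-dimensional inputs to a small random
        # subspace, then fit a feed-forward regressor in the compressed domain.
        rng = np.random.default_rng(3)
        n_samples, n_features, n_compressed = 200, 500, 40

        X = rng.normal(size=(n_samples, n_features))
        w_true = rng.normal(size=n_features)
        y = X @ w_true + 0.1 * rng.normal(size=n_samples)

        A = rng.normal(size=(n_features, n_compressed)) / np.sqrt(n_compressed)
        X_compressed = X @ A        # far fewer inputs than samples after projection

        model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
        model.fit(X_compressed[:150], y[:150])
        err = np.mean((model.predict(X_compressed[150:]) - y[150:]) ** 2)
        print(f"test MSE in the compressed domain: {err:.3f}")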

  18. Community structure of complex networks based on continuous neural network

    Science.gov (United States)

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou

    2017-09-01

    As a new subject, the study of complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so analyzing the community structure of complex networks accurately is a very important task. In this paper, we study the problem of extracting the community structure of complex networks and propose a continuous neural network (CNN) algorithm. It is proved that, for any given initial value, the continuous neural network algorithm converges to the eigenvector of the maximum eigenvalue of the network modularity matrix. Therefore, the signs of the components of the stable state reached by the network's evolution yield the division of the nodes into two communities.
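
    The quantity the continuous neural network is proved to converge to can be computed directly for a toy graph (a sketch using a standard eigensolver rather than the paper's network dynamics; the 6-node adjacency matrix is an assumption for illustration):

        import numpy as np

        # Illustrative only: bipartition from the leading eigenvector of the
        # modularity matrix B = A - k k^T / (2m).
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)

        k = A.sum(axis=1)                 # node degrees
        m = k.sum() / 2.0                 # number of edges
        B = A - np.outer(k, k) / (2 * m)  # modularity matrix

        eigvals, eigvecs = np.linalg.eigh(B)
        leading = eigvecs[:, np.argmax(eigvals)]
        communities = leading >= 0        # sign pattern splits nodes into two groups
        print(communities)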

  19. Identification and Position Control of Marine Helm using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hui ZHU

    2008-02-01

    Full Text Available If nonlinearities such as saturation of the amplifier gain and motor torque, gear backlash, and shaft compliance (just to name a few) are considered in the position control system of a marine helm, traditional control methods are no longer sufficient to improve the performance of the system. In this paper an alternative to traditional control methods, a neural network reference controller, is proposed to establish adaptive control of the marine helm position and drive the controlled variable to the commanded position. This controller comprises two neural networks: a plant model network used to identify the nonlinear system, and a controller network used to make the output follow the reference model. The experimental results demonstrate that this adaptive neural network reference controller achieves much better control performance than traditional controllers.

  20. Transient stability analysis of electric energy systems via a fuzzy ART-ARTMAP neural network

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Wagner Peron; Silveira, Maria do Carmo G.; Lotufo, AnnaDiva P.; Minussi, Carlos. R. [Department of Electrical Engineering, Sao Paulo State University (UNESP), P.O. Box 31, 15385-000, Ilha Solteira, SP (Brazil)

    2006-04-15

    This work presents a methodology to analyze the transient stability (first oscillation) of electric energy systems using a neural network based on the ART architecture (adaptive resonance theory), named the fuzzy ART-ARTMAP neural network, for real-time applications. The security margin is used as the stability analysis criterion, considering three-phase short circuit faults with a transmission line outage. The neural network operation consists of two fundamental phases: training and analysis. The training phase requires a great deal of processing, while the analysis phase is carried out almost without computational effort; this is the principal reason for using neural networks to solve complex problems that need fast solutions, as in real-time applications. The ART neural networks have plasticity and stability as primary characteristics, which are essential qualities for the training execution and for an efficient analysis. The fuzzy ART-ARTMAP neural network is proposed with the aim of superior performance, in terms of precision and speed, when compared to the conventional ARTMAP, and even more so when compared to neural networks trained with the backpropagation algorithm, which is a benchmark in the neural network area. (author)

  1. Neural Network Architectures for General Image Recognition.

    Science.gov (United States)

    1992-07-21

    Only figure-caption fragments survive in this record: schematics of the human brain showing the major structures and the Brodmann areas associated with vision, a schematic of the visual projections to the brain areas involved in vision, and a block diagram of the vision system.

  2. Compact 4-D Optical Neural Network Architecture

    Science.gov (United States)

    1990-04-25

    Only figure-caption and reference fragments survive in this record: details of the optical elements, in which the light is again separated and each beam is re-imaged onto the cooled CCD detector arrays.

  3. Digital systems for artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Atlas, L.E. (Interactive Systems Design Lab., Univ. of Washington, WA (US)); Suzuki, Y. (NTT Human Interface Labs. (US))

    1989-11-01

    A tremendous flurry of research activity has developed around artificial neural systems. These systems have also been tested in many applications, often with positive results. Most of this work has taken place as digital simulations on general-purpose serial or parallel digital computers. Specialized neural network emulation systems have also been developed for more efficient learning and use. The authors discussed how dedicated digital VLSI integrated circuits offer the highest near-term future potential for this technology.

  4. Artificial Neural Networks for Diagnosis of Kidney Stones Disease

    Directory of Open Access Journals (Sweden)

    Koushal Kumar

    2012-07-01

    Full Text Available Artificial neural networks are often used as powerful discriminating classifiers for tasks in medical diagnosis and early detection of diseases. They have several advantages over parametric classifiers such as discriminant analysis. The objective of this paper is to diagnose kidney stone disease by using three different neural network algorithms with different architectures and characteristics. The aim of this work is to compare the performance of all three neural networks on the basis of accuracy, time taken to build the model, and training data set size. We use learning vector quantization (LVQ), a two-layer feed-forward perceptron trained with the back propagation algorithm, and radial basis function (RBF) networks for the diagnosis of kidney stone disease. We used the Waikato Environment for Knowledge Analysis (WEKA) version 3.7.5, an open source tool, for simulation. The data set used for diagnosis is real-world data with 1000 instances and 8 attributes. Finally, we compare the performance of the different algorithms to propose the best algorithm for kidney stone diagnosis. This helps in early identification of kidney stones in patients and reduces the diagnosis time.

  5. A Survey of 5G Network: Architecture and Emerging Technologies

    National Research Council Canada - National Science Library

    Gupta, A; Jha, R. K

    2015-01-01

    .... This paper presents the results of a detailed survey on the fifth generation (5G) cellular network architecture and some of the key emerging technologies that are helpful in improving the architecture and meeting the demands of users...

  6. Natural language acquisition in large scale neural semantic networks

    Science.gov (United States)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well, or better, than more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  7. FPGA Implementations of Feed Forward Neural Network by using Floating Point Hardware Accelerators

    Directory of Open Access Journals (Sweden)

    Gabriele-Maria Lozito

    2014-01-01

    Full Text Available This paper documents research towards the analysis of different solutions for implementing a neural network architecture on an FPGA design using floating point accelerators. In particular, two different implementations are investigated: a high-level solution that creates a neural network on a soft processor design, with different strategies for enhancing the performance of the process; and a low-level solution, achieved by a cascade of floating point arithmetic elements. Comparisons of the achieved performance, in terms of both time consumption and the FPGA resources employed by the architectures, are presented.

  8. Equivalence of Conventional and Modified Network of Generalized Neural Elements

    Directory of Open Access Journals (Sweden)

    E. V. Konovalov

    2016-01-01

    Full Text Available The article is devoted to the analysis of neural networks consisting of generalized neural elements. The first part of the article proposes a new neural network model, a modified network of generalized neural elements (MGNE-network). This network develops the model of the generalized neural element, whose formal description contains some flaws. In the model of the MGNE-network these drawbacks are overcome. A neural network is introduced all at once, without a preliminary description of the model of a single neural element and the method of interaction between such elements. The description of the neural network mathematical model is simplified and makes it relatively easy to construct a simulation model on its basis to conduct numerical experiments. The model of the MGNE-network is universal, uniting the properties of networks consisting of neurons-oscillators and neurons-detectors. In the second part of the article we prove the equivalence of the dynamics of the two considered neural networks: the network consisting of classical generalized neural elements, and the MGNE-network. We introduce the definition of equivalence in the functioning of the generalized neural element and the MGNE-network consisting of a single element. Then we introduce the definition of the equivalence of the dynamics of the two neural networks in general. The correspondence between the parameters of the two considered neural network models is determined. We discuss the issue of matching the initial conditions of the two considered neural network models. We prove a theorem about the equivalence of the dynamics of the two considered neural networks. This theorem allows us to apply all previously obtained results for networks consisting of classical generalized neural elements to the MGNE-network.

  9. The Analysis of User Behaviour of a Network Management Training Tool using a Neural Network

    Directory of Open Access Journals (Sweden)

    Helen Donelan

    2005-10-01

    Full Text Available A novel method for the analysis and interpretation of data that describes the interaction between trainee network managers and a network management training tool is presented. A simulation-based approach is currently being used to train network managers, through the use of a simulated network. The motivation is to provide a tool for exposing trainees to a lifelike situation without disrupting a live network. The data logged by this system describes the detailed interaction between the trainee network manager and the simulated network. The work presented here provides an analysis of this interaction data that enables an assessment of the capabilities of the trainee network manager as well as an understanding of how the network management tasks are being approached. A neural network architecture is implemented in order to perform an exploratory data analysis of the interaction data. The neural network employs a novel form of continuous self-organisation to discover key features in the data and thus provide new insights into the learning and teaching strategies employed.

  10. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  11. Implementing Signature Neural Networks with Spiking Neurons

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  12. Network Traffic Prediction based on Particle Swarm BP Neural Network

    Directory of Open Access Journals (Sweden)

    Yan Zhu

    2013-11-01

    Full Text Available The traditional BP neural network algorithm has drawbacks such as easily falling into local minima and slow convergence speed. Particle swarm optimization is an evolutionary computation technique based on swarm intelligence which cannot guarantee global convergence. The Artificial Bee Colony algorithm is a global optimization algorithm with many advantages: it is simple, convenient and robust. In this paper, a new BP neural network based on the Artificial Bee Colony algorithm and the particle swarm optimization algorithm is proposed to optimize the weights and thresholds of the BP neural network. Network traffic prediction experiments show that the optimized BP network traffic prediction based on PSO-ABC has high prediction accuracy and stable prediction performance.

  13. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
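
    The idea of backpropagating through spike events can be sketched with a surrogate-gradient construction (a generic sketch assuming PyTorch; the authors' method instead works directly on membrane potentials with spike discontinuities treated as noise, so this illustrates the spirit rather than their exact technique):

        import torch

        class SpikeFn(torch.autograd.Function):
            """Hard threshold in the forward pass, smooth surrogate in the backward pass."""

            @staticmethod
            def forward(ctx, v):
                ctx.save_for_backward(v)
                return (v > 0).float()          # spike where the potential exceeds threshold

            @staticmethod
            def backward(ctx, grad_output):
                (v,) = ctx.saved_tensors
                # Pretend the discontinuity has a smooth slope near threshold
                surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
                return grad_output * surrogate

        spike = SpikeFn.apply

        # Tiny two-layer "spiking" forward pass on static inputs (no time dynamics here)
        w1 = torch.nn.Parameter(0.05 * torch.randn(100, 784))
        w2 = torch.nn.Parameter(0.05 * torch.randn(10, 100))
        x = torch.rand(32, 784)                 # stand-in for MNIST-style inputs
        target = torch.randint(0, 10, (32,))

        h_spikes = spike(x @ w1.t() - 1.0)      # hidden-layer spikes (threshold 1.0)
        logits = h_spikes @ w2.t()
        loss = torch.nn.functional.cross_entropy(logits, target)
        loss.backward()                         # gradients flow through the surrogate
        print(w1.grad.abs().mean())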

  14. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP as well as DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.

  15. Training Deep Spiking Neural Networks using Backpropagation

    Directory of Open Access Journals (Sweden)

    Jun Haeng Lee

    2016-11-01

    Full Text Available Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  16. Kannada character recognition system using neural network

    Science.gov (United States)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, which include reading aids for the blind, bank cheque processing, and the conversion of any handwritten document into structured text form. There is not yet a sufficient body of work on Indian language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.

  17. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The artificial neural network model of this research uses slope characteristics as input and leads to an output in the form of the probability of failure...... and factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and the safety factor of landslide hazard in the study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss...... failure" which is the main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceeds the effective shear strength. These slides have been referred to as landslides. An expert system based on artificial...

  18. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
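
    The matrix-valued differential equations referred to above are not reproduced here, but a closely related and much simpler gradient-type dynamic system for the Moore-Penrose inverse can illustrate the general idea: integrating dX/dt = -gamma * A^T (A X - I) from a zero initial state drives X towards the pseudoinverse when A has full column rank. This is a standard textbook construction, not the authors' networks; the step size, iteration count, and test matrix below are assumptions.

      # Gradient-flow sketch for the Moore-Penrose inverse (assumed parameters).
      import numpy as np

      def pinv_by_gradient_flow(A, steps=20000):
          """Euler-integrate dX/dt = -gamma * A.T @ (A @ X - I) from X = 0."""
          m, n = A.shape
          gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below the stability limit
          X = np.zeros((n, m))
          I = np.eye(m)
          for _ in range(steps):
              X -= gamma * A.T @ (A @ X - I)
          return X

      rng = np.random.default_rng(0)
      A = rng.normal(size=(5, 3))                   # full column rank with probability 1
      X = pinv_by_gradient_flow(A)
      print("max abs error vs numpy pinv:", np.max(np.abs(X - np.linalg.pinv(A))))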

  19. Classification of radar clutter using neural networks.

    Science.gov (United States)

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented.

  20. Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network.

    Science.gov (United States)

    Budiharto, Widodo

    2015-01-01

    For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using three ultrasonic distance sensors and a backpropagation neural network, with a camera for face recognition. A 2.4 GHz video transmitter is used so that the operator/user can direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system.
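
    A rough sketch of the obstacle-avoidance component described above is given below: a small backpropagation network maps three ultrasonic distance readings to a steering decision. The distance thresholds, labelling rule, and training data are invented for illustration only; the robot's real sensor readings and the authors' network parameters are not available here.

      # Sketch: backpropagation network mapping 3 ultrasonic readings to a steering action.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      ACTIONS = ["forward", "turn_left", "turn_right"]

      def label(left, front, right):
          """Toy labelling rule standing in for the robot's training data."""
          if front > 0.5:
              return 0                      # path ahead is clear
          return 1 if left > right else 2   # turn toward the more open side

      D = rng.uniform(0.05, 2.0, size=(500, 3))      # left, front, right distances in metres
      y = np.array([label(*row) for row in D])

      net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      net.fit(D, y)

      reading = np.array([[1.2, 0.3, 0.4]])          # obstacle ahead, more room on the left
      print(ACTIONS[net.predict(reading)[0]])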

  1. Convolutional neural networks for synthetic aperture radar classification

    Science.gov (United States)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.
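
    One concrete detail worth illustrating is the two-channel input: the complex SAR image is split into magnitude and phase planes that are stacked as channels before being fed to the CNN. The sketch below makes only that input preparation explicit, plus a single hand-rolled convolution; the chip size, filter shapes, and random stand-in data are assumptions, and this is not the paper's CAFFE or Torch7 model.

      # Sketch: building a 2-channel (magnitude, phase) input from complex SAR data.
      import numpy as np

      rng = np.random.default_rng(0)
      slc = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))   # stand-in complex SAR chip

      x = np.stack([np.abs(slc), np.angle(slc)])     # shape (2, 64, 64): magnitude and phase channels

      def conv2d(img, kernels):
          """Naive valid-mode convolution; kernels has shape (out_ch, in_ch, k, k)."""
          out_ch, in_ch, k, _ = kernels.shape
          h, w = img.shape[1] - k + 1, img.shape[2] - k + 1
          out = np.zeros((out_ch, h, w))
          for o in range(out_ch):
              for i in range(h):
                  for j in range(w):
                      out[o, i, j] = np.sum(img[:, i:i + k, j:j + k] * kernels[o])
          return out

      kernels = rng.normal(size=(8, 2, 5, 5)) * 0.1   # first convolutional layer's filters
      feature_maps = np.maximum(conv2d(x, kernels), 0)  # ReLU activation
      print(feature_maps.shape)                         # (8, 60, 60)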

  2. A Neural Network Model for Forecasting CO2 Emission

    Directory of Open Access Journals (Sweden)

    C. Gallo

    2014-06-01

    Full Text Available Air pollution is today a serious problem, caused mainly by human activity. Classical methods are not considered able to efficiently model complex phenomena such as meteorology and air pollution because they usually make approximations or overly rigid schematisations. Our purpose is a more flexible architecture (an artificial neural network model) to implement a short-term CO2 emission forecasting tool applied to the cereal sector in the Apulia region of Southern Italy, to determine how the introduction of cultural methods with less environmental impact affects a possible pollution reduction.

  3. NEURAL NETWORK SYSTEM FOR DIAGNOSTICS OF AVIATION DESIGNATION PRODUCTS

    Directory of Open Access Journals (Sweden)

    В. Єременко

    2011-02-01

    Full Text Available In this article, a hybrid neural network with a Kohonen layer and a multilayer perceptron is proposed for solving the problem of classifying the technical state of an object. The information-measuring system can be used for standardless diagnostics, cluster analysis, and classification of products made from composite materials. The advantages of this architecture are flexibility, high performance, the ability to use different methods for collecting diagnostic information about the unit under test, and high reliability of information processing.
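
    The hybrid structure mentioned above, a Kohonen layer feeding a multilayer perceptron, can be sketched very roughly as follows. The feature dimensions, map size, and stand-in diagnostic data are assumptions; the sketch only shows how distances to a self-organizing layer's prototypes could be passed on to a perceptron classifier, not the authors' information-measuring system.

      # Sketch: Kohonen (SOM) layer followed by a multilayer perceptron (assumed sizes/data).
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 12))                    # stand-in diagnostic feature vectors
      y = (X[:, 0] + X[:, 3] > 0).astype(int)           # stand-in "technical state" labels

      # Train a small 1-D Kohonen layer of prototype vectors.
      n_units = 16
      W = rng.normal(size=(n_units, X.shape[1]))
      for epoch in range(20):
          lr = 0.5 * (1 - epoch / 20)
          for x in X:
              bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best matching unit
              for u in range(n_units):
                  h = np.exp(-((u - bmu) ** 2) / 4.0)          # neighbourhood function
                  W[u] += lr * h * (x - W[u])

      # Encode each sample by its distances to the Kohonen prototypes,
      # then classify the technical state with a multilayer perceptron.
      codes = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(codes, y)
      print("training accuracy:", clf.score(codes, y))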

  4. Vertex Reconstructing Neural Network at the ZEUS Central Tracking Detector

    CERN Document Server

    Dror, G; Dror, Gideon; Etzion, Erez

    2001-01-01

    An unconventional solution for finding the location of event creation is presented. It is based on two feed-forward neural networks with fixed architecture, whose parameters are chosen so as to reach a high accuracy. The interaction point location is a parameter that can be used to select events of interest from the very high rate of events created at the current experiments in High Energy Physics. The system suggested here is tested on simulated data sets of the ZEUS Central Tracking Detector, and is shown to perform better than conventional algorithms.

  5. Digital pathology annotation data for improved deep neural network classification

    Science.gov (United States)

    Kim, Edward; Mente, Sai Lakshmi Deepika; Keenan, Andrew; Gehlot, Vijay

    2017-03-01

    In the field of digital pathology, there is an explosive amount of imaging data being generated. Thus, there is an ever-growing need to create assistive or automatic methods to analyze collections of images for screening and classification. Machine learning, specifically deep learning algorithms, developed for digital pathology have the potential to assist in this way. Deep learning architectures have demonstrated great success over existing classification models but require massive amounts of labeled training data that either do not exist or are cost- and time-prohibitive to obtain. In this project, we present a framework for representing, collecting, validating, and utilizing cytopathology features for improved neural network classification.

  6. Low-Dose CT via Deep Neural Network

    CERN Document Server

    Chen, Hu; Zhang, Weihua; Liao, Peixi; Li, Ke; Zhou, Jiliu; Wang, Ge

    2016-01-01

    In order to reduce the potential radiation risk, low-dose CT has attracted more and more attention. However, simply lowering the radiation dose will significantly degrade the imaging quality. In this paper, we propose a noise reduction method for low-dose CT via deep learning without accessing the original projection data. A deep convolutional neural network architecture is used to map low-dose CT images to their corresponding normal-dose CT images patch by patch. Qualitative and quantitative evaluations demonstrate the state-of-the-art performance of the proposed method.
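
    The patch-by-patch mapping mentioned above can be illustrated by the data-preparation step alone: paired low-dose and normal-dose images are cut into corresponding patches that would serve as network inputs and targets. This is a minimal sketch under assumed patch size and stride, with random stand-in images; no actual CT data or the authors' network is involved.

      # Sketch: extracting paired low-dose / normal-dose patches for patch-wise training.
      import numpy as np

      rng = np.random.default_rng(0)
      normal_dose = rng.random((256, 256))                       # stand-in normal-dose CT slice
      low_dose = normal_dose + rng.normal(0, 0.1, (256, 256))    # stand-in noisy low-dose slice

      def extract_patches(img, size=32, stride=16):
          patches = []
          for i in range(0, img.shape[0] - size + 1, stride):
              for j in range(0, img.shape[1] - size + 1, stride):
                  patches.append(img[i:i + size, j:j + size])
          return np.stack(patches)

      inputs = extract_patches(low_dose)      # network inputs
      targets = extract_patches(normal_dose)  # corresponding training targets
      print(inputs.shape, targets.shape)      # (225, 32, 32) each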

  7. Prediction horizon effects on stochastic modelling hints for neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Drossu, R.; Obradovic, Z. [Washington State Univ., Pullman, WA (United States)

    1995-12-31

    The objective of this paper is to investigate the relationship between stochastic models and neural network (NN) approaches to time series modelling. Experiments on a complex real-life prediction problem (entertainment video traffic) indicate that prior knowledge can be obtained through stochastic analysis, both with respect to an appropriate NN architecture and with respect to an appropriate sampling rate when the prediction horizon is larger than one. An improvement of the obtained NN predictor is also proposed through a bias-removal post-processing step, resulting in much better performance than the best stochastic model.
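
    The bias-removal post-processing mentioned above can be illustrated with a minimal sketch. The correction rule shown here, subtracting the mean residual estimated on held-out data, is an assumption about what such a step might look like, not the authors' exact procedure, and the data are synthetic.

      # Sketch: removing a systematic bias from a predictor's output (assumed procedure).
      import numpy as np

      rng = np.random.default_rng(0)
      actual = rng.random(500)
      raw_pred = actual + 0.05 + rng.normal(0, 0.02, 500)   # stand-in biased NN predictions

      # Estimate the bias on a validation split, then subtract it from later predictions.
      val, test = slice(0, 250), slice(250, 500)
      bias = np.mean(raw_pred[val] - actual[val])
      corrected = raw_pred[test] - bias

      rmse = lambda p, a: np.sqrt(np.mean((p - a) ** 2))
      print("before:", rmse(raw_pred[test], actual[test]), "after:", rmse(corrected, actual[test]))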

  8. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. In contrast to these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. This study was therefore carried out to present the use of artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research used evaluation data from 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, covering fiber length, length uniformity, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and a fiber quality index. This quality index was estimated as a weighted average of the score (1 to 5) assigned to each HVI characteristic, according to industry standards. The artificial neural networks showed a high capacity for correctly classifying the 20 selected genotypes based on the fiber quality index: when fiber length was used together with the short fiber index, fiber maturity, and micronaire index, the networks gave better results than when only fiber length or the previous associations were used. It was also observed that submitting mean data of new genotypes to networks trained with replicate data provides better genotype classification. The results of the present study indicate that artificial neural networks have great potential for use in the different stages of a cotton genetic improvement program aimed at improving the fiber quality of future cultivars.

  9. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics. 2. Feed-forward networks, Perceptrons, Function approximators. 3. Self-organisation, Feature Maps. 4. Feed-back Networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection.

  10. Cloud Radio Access Network architecture. Towards 5G mobile networks

    DEFF Research Database (Denmark)

    Checko, Aleksandra

    rate in the fronthaul. For the analyzed data sets, in deployments where diverse traffic types are mixed (bursty, e.g., web browsing, and constant bit rate, e.g., video streaming) and cells from various geographical areas (e.g., office and residential) are connected to the BBU pool, the multiplexing gain......Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges that mobile operators face while trying to support ever-growing end-users’ needs towards the 5th generation of mobile networks (5G). The main idea behind C-RAN is to split the base...... as to design the so-called fronthaul network, interconnecting those parts. This thesis focuses on quantifying those benefits and proposing a flexible and capacity-optimized fronthaul network. It is shown that a C-RAN with a functional split resulting in a variable bit rate on the fronthaul links brings cost...

  11. A novel hybrid-maximum neural network in stereo-matching process.

    Science.gov (United States)

    Laskowski, Lukasz

    2013-01-01

    In the present paper, a completely innovative artificial neural network architecture based on the Hopfield structure for solving the stereo-matching problem is described: a hybrid neural network consisting of the classical analog Hopfield neural network and the Maximum Neural Network. The application of this kind of structure as part of an assistive device for visually impaired individuals is considered. The role of the analog Hopfield network is to find the attraction area of the global minimum, whereas the Maximum Neural Network finds the accurate location of this minimum. The network presented here is characterized by an extremely high rate of work performance with the same accuracy as a classical Hopfield-like network, which makes it possible to use this kind of structure as part of systems working in real time. The network considered here underwent experimental tests with the use of real stereo pictures as well as simulated stereo images. This enables error calculation and direct comparison with the classic analog Hopfield neural network as well as other networks proposed in the literature.

  12. Gap Filling of Daily Sea Levels by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Lyubka Pashova

    2013-06-01

    Full Text Available In recent years, intelligent methods such as artificial neural networks have been successfully applied to data analysis in different fields of the geosciences. One of the practical problems encountered is the presence of gaps in the time series, which prevents their comprehensive use for scientific and practical purposes. The article briefly describes two types of artificial neural network (ANN) architectures: Feed-Forward Backpropagation (FFBP) and the recurrent Echo State Network (ESN). In some cases, an ANN can be used as an alternative to traditional methods to fill in missing values in time series. We conducted several experiments to fill the missing values of daily sea levels spanning a 5-year period using both ANN architectures. A multiple linear regression for the same purpose has also been applied. The sea level data are derived from the records of the tide gauge Burgas, which is located on the western Black Sea coast. The achieved results show that the performance of the ANN models is better than that of the classical one, and they are very promising for real-time interpolation of missing data in the time series.
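
    Of the two architectures mentioned above, the echo state network lends itself to a compact illustration: a fixed random reservoir is driven by the observed series, and a linear readout trained by ridge regression predicts the next value, which could then be used to fill short gaps. The reservoir size, spectral radius, regularization, and synthetic sea-level-like series below are assumptions, not the study's configuration.

      # Minimal echo state network sketch for one-step prediction / gap filling (assumed setup).
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(2000)
      series = np.sin(2 * np.pi * t / 365) + 0.05 * rng.normal(size=t.size)   # stand-in daily sea levels

      n_res = 200
      W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
      W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

      def run_reservoir(u):
          states, x = [], np.zeros(n_res)
          for v in u:
              x = np.tanh(W_in[:, 0] * v + W @ x)
              states.append(x.copy())
          return np.array(states)

      X = run_reservoir(series[:-1])                     # reservoir states
      Y = series[1:]                                     # next-value targets
      ridge = 1e-6
      W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

      pred = X @ W_out
      print("one-step RMSE:", np.sqrt(np.mean((pred - Y) ** 2)))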

  13. Implementation aspects of Graph Neural Networks

    Science.gov (United States)

    Barcz, A.; Szymański, Z.; Jankowski, S.

    2013-10-01

    This article summarises the results of implementing a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function that is a contraction map, which is assured by imposing a penalty on the model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions that were made during the GNN implementation process.

  14. Spectral classification using convolutional neural networks

    CERN Document Server

    Hála, Pavel

    2014-01-01

    There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star, or galaxy) from one-dimensional spectra only. The author developed several scripts and C programs for dataset preparation, preprocessing, and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application to a dataset of more than 60000 spectra yielded a success rate of nearly 95%. This thesis demonstrates the great potential of convolutional neural networks and deep learning methods in astrophysics.

  15. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  16. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR) image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using the Moving and Stationary Target Acquisition and Recognition (MSTAR) SAR dataset prove the validity of this method.
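
    The final classification stage described above, features extracted by a network and handed to a support vector machine, can be sketched generically as follows. The feature extractor here is a stand-in (a fixed random projection plus ReLU) rather than the authors' improved CNN, and the images and labels are synthetic; only the feature-then-SVM pipeline is illustrated.

      # Sketch: classifying network-style features with an SVM (stand-in feature extractor).
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      images = rng.random((400, 32 * 32))                       # stand-in SAR chips, flattened
      mean_val = images.mean(axis=1)
      labels = np.digitize(mean_val, np.quantile(mean_val, [1 / 3, 2 / 3]))  # 3 stand-in classes

      # Stand-in "CNN" feature extractor: fixed random projection followed by ReLU.
      P = rng.normal(size=(32 * 32, 128)) / 32
      features = np.maximum(images @ P, 0)

      X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25, random_state=0)
      svm = SVC(kernel="rbf").fit(X_tr, y_tr)
      print("test accuracy:", svm.score(X_te, y_te))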

  17. Contractor Prequalification Based on Neural Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jin-long; YANG Lan-rong

    2002-01-01

    Contractor prequalification involves the screening of contractors by a project owner, according to a given set of criteria, in order to determine their competence to perform the work if awarded the construction contract. This paper introduces the capabilities of neural networks in solving problems related to contractor prequalification. The neural network system for contractor prequalification has an input vector of 8 components and an output vector of 1 component. The output vector represents whether a contractor is qualified or not qualified to submit a bid on a project.
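
    The stated dimensions, an 8-component input vector and a single qualified/not-qualified output, map directly onto a small feed-forward classifier. The criteria, training data, decision rule, and network size in the sketch below are assumptions for illustration only.

      # Sketch: 8-criteria contractor prequalification classifier (assumed data and sizes).
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      # Eight assumed prequalification criteria scores in [0, 1] per contractor.
      X = rng.random((200, 8))
      y = (X.mean(axis=1) > 0.5).astype(int)       # stand-in qualified (1) / not qualified (0) labels

      net = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000, random_state=0).fit(X, y)

      candidate = rng.random((1, 8))               # scores of a new contractor on the 8 criteria
      print("qualified" if net.predict(candidate)[0] == 1 else "not qualified")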

  18. Simulation of photosynthetic production using neural network

    Science.gov (United States)

    Kmet, Tibor; Kmetova, Maria

    2013-10-01

    This paper deals with neural-network-based optimal control synthesis for solving optimal control problems with control and state constraints and discrete time delay. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. This approach is applicable to a wide class of nonlinear systems. The proposed simulation method is illustrated by the optimal control problem of photosynthetic production described by discrete time-delay differential equations. Results show that the adaptive-critic-based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  19. Top tagging with deep neural networks [Vidyo

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image-based techniques or multivariate approaches using high-level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high-level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  20. Intelligent neural network classifier for automatic testing

    Science.gov (United States)

    Bai, Baoxing; Yu, Heping

    1996-10-01

    This paper is concerned with an application of a multilayer feedforward neural network to the visual inspection of industrial images, and introduces a high-performance image processing and recognition system that can be used for real-time detection of blemishes, streaks, cracks, etc. on the inner walls of high-accuracy pipes. To take full advantage of the capabilities of artificial neural networks, such as distributed information memory, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the images to be inspected and to classify them.