WorldWideScience

Sample records for neural architecture underlying

  1. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS 386 PC. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
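
    As a rough illustration of the coarse-coding idea behind a CMAC (a minimal NumPy sketch, not the report's C software; the tiling counts, memory layout and learning rate are arbitrary assumptions), the following trains a CMAC to approximate a 2-D function with online delta-rule updates, the kind of fast local learning that makes real-time training feasible:

    ```python
    # Minimal CMAC sketch: several offset tilings over [0,1]^2, each contributing
    # one weight per input; learning is a local LMS update on the active cells.
    import numpy as np

    class CMAC:
        def __init__(self, n_tilings=8, tiles_per_dim=10, n_dims=2, lr=0.1, seed=0):
            rng = np.random.default_rng(seed)
            self.n_tilings, self.tiles, self.lr = n_tilings, tiles_per_dim, lr
            self.weights = np.zeros((n_tilings,) + (tiles_per_dim,) * n_dims)
            self.offsets = rng.uniform(0.0, 1.0 / tiles_per_dim, size=(n_tilings, n_dims))

        def _active_cells(self, x):
            # x is assumed to lie in [0, 1]^n_dims; one cell is active per tiling
            for t in range(self.n_tilings):
                idx = np.clip(np.floor((x + self.offsets[t]) * self.tiles).astype(int),
                              0, self.tiles - 1)
                yield (t,) + tuple(idx)

        def predict(self, x):
            return sum(self.weights[c] for c in self._active_cells(x))

        def update(self, x, target):
            error = target - self.predict(x)           # delta rule shared by active cells
            for c in self._active_cells(x):
                self.weights[c] += self.lr * error / self.n_tilings

    cmac = CMAC()
    f = lambda x: np.sin(2 * np.pi * x[0]) * np.cos(2 * np.pi * x[1])
    rng = np.random.default_rng(1)
    for _ in range(20000):                             # online training on random samples
        x = rng.uniform(size=2)
        cmac.update(x, f(x))
    x = np.array([0.3, 0.7])
    print("CMAC prediction:", cmac.predict(x), " true value:", f(x))
    ```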

  2. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    Science.gov (United States)

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

    Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used for evaluating and predicting its performance under the different parameters that affect it. However, modeling of microalgae flocculation is not simple and has not yet been performed under all experimental conditions, mostly because microalgae cells behave differently under different flocculation conditions. In the current study, the modeling of microalgae flocculation is investigated with different neural network architectures. The microalgae species Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Multilayer perceptron (MLP) and radial basis function architectures failed to predict the targets successfully, whereas modeling was effective with an ensemble architecture of MLP networks. Comparison between the performance of the ensemble and that of each individual network demonstrates the suitability of the ensemble architecture for microalgae flocculation modeling.
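
    A hedged sketch of the ensemble idea described above, averaging scikit-learn MLP regressors trained from different random initialisations; the data are synthetic stand-ins for the flocculation measurements and the input names are hypothetical:

    ```python
    # Ensemble-of-MLPs sketch: the averaged prediction of several MLPs is compared
    # with a single MLP on the same synthetic regression problem.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 3))            # e.g. dose, pH, mixing time (hypothetical)
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2] + 0.05 * rng.normal(size=200)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    members = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=s).fit(X_tr, y_tr)
               for s in range(10)]
    ensemble_pred = np.mean([m.predict(X_te) for m in members], axis=0)

    single = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
    mse = lambda p: float(np.mean((p - y_te) ** 2))
    print("single MLP MSE:", mse(single.predict(X_te)), " ensemble MSE:", mse(ensemble_pred))
    ```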

  3. Neural codes of seeing architectural styles.

    Science.gov (United States)

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  4. Neural architecture underlying classification of face perception paradigms.

    Science.gov (United States)

    Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T

    2015-10-01

    We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
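
    The clustering step can be pictured with a small sketch: assuming each experiment has been reduced to a binary activation vector over brain regions (toy data here, not the BrainMap archive), hierarchical clustering groups experiments with similar activation patterns and the tree can be cut into a fixed number of sub-classes:

    ```python
    # Hierarchical clustering of toy "experiments" by activation-pattern similarity.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    n_experiments, n_regions = 40, 60
    activation = (rng.random((n_experiments, n_regions)) < 0.2).astype(float)

    dist = pdist(activation, metric="jaccard")     # distance between binary activation patterns
    tree = linkage(dist, method="average")
    labels = fcluster(tree, t=4, criterion="maxclust")   # cut the tree into four sub-classes
    print("experiments per cluster:", np.bincount(labels)[1:])
    ```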

  5. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL]; Plank, James [University of Tennessee (UT)]; Disney, Adam [University of Tennessee (UT)]; Reynolds, John [University of Tennessee (UT)]

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.
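
    A toy evolutionary-optimization loop in the spirit described above (this is not the ORNL framework; network size, mutation scale and selection scheme are arbitrary assumptions): a population of small feed-forward nets is scored on a synthetic two-class problem, the fittest genomes are kept, and mutated copies form the next generation:

    ```python
    # Truncation selection plus Gaussian mutation over flattened network parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)          # XOR-like labels

    def decode(genome):
        W1 = genome[:8].reshape(2, 4); b1 = genome[8:12]
        W2 = genome[12:16].reshape(4, 1); b2 = genome[16:17]
        return W1, b1, W2, b2

    def accuracy(genome):
        W1, b1, W2, b2 = decode(genome)
        h = np.tanh(X @ W1 + b1)
        p = (h @ W2 + b2).ravel() > 0
        return float(np.mean(p == y))

    pop = rng.normal(size=(50, 17))                  # 50 genomes of 17 parameters each
    for gen in range(200):
        fitness = np.array([accuracy(g) for g in pop])
        parents = pop[np.argsort(fitness)[-10:]]     # keep the best 10
        pop = np.repeat(parents, 5, axis=0) + 0.1 * rng.normal(size=(50, 17))
    print("best accuracy:", max(accuracy(g) for g in pop))
    ```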

  6. Neural codes of seeing architectural styles

    OpenAIRE

    Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.

    2017-01-01

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding sugges...

  7. Optical Neural Network Classifier Architectures

    National Research Council Canada - National Science Library

    Getbehead, Mark

    1998-01-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and classification of high-dimensional data for Air...

  8. Emulation of Neural Networks on a Nanoscale Architecture

    International Nuclear Information System (INIS)

    Eshaghian-Wilner, Mary M; Friesz, Aaron; Khitun, Alex; Navab, Shiva; Parker, Alice C; Wang, Kang L; Zhou, Chongwu

    2007-01-01

    In this paper, we propose using a nanoscale spin-wave-based architecture for implementing neural networks. We show that this architecture can efficiently realize highly interconnected neural network models such as the Hopfield model. In our proposed architecture, no point-to-point interconnection is required, so unlike standard VLSI design, no fan-in/fan-out constraint limits the interconnectivity. Using spin-waves, each neuron could broadcast to all other neurons simultaneously and similarly a neuron could concurrently receive and process multiple data. Therefore in this architecture, the total weighted sum to each neuron can be computed by the sum of the values from all the incoming waves to that neuron. In addition, using the superposition property of waves, this computation can be done in O(1) time, and neurons can update their states quite rapidly
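
    A conventional software sketch of the Hopfield dynamics mentioned above makes the weighted-sum update explicit; the point of the spin-wave architecture is that this sum, computed serially here, is obtained in O(1) by wave superposition (pattern sizes and corruption level below are arbitrary):

    ```python
    # Hopfield recall sketch: Hebbian weights, synchronous sign(W @ state) updates.
    import numpy as np

    rng = np.random.default_rng(0)
    patterns = np.sign(rng.normal(size=(3, 64)))          # three stored +/-1 patterns
    W = (patterns.T @ patterns) / patterns.shape[1]       # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)

    state = patterns[0].copy()
    state[:20] *= -1                                      # corrupt the first pattern
    for _ in range(10):                                   # each step is one full weighted sum per neuron
        state = np.sign(W @ state)
        state[state == 0] = 1
    print("recovered pattern 0:", bool(np.all(state == patterns[0])))
    ```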

  9. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion
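
    A toy counting-only decoder in the spirit of the description above (our own illustration, not the authors' architecture; channel counts, wiring and threshold are made up): with 0/1 connectivity, each output unit simply accumulates spike counts and fires on an integer threshold, so no weight multiplications are needed:

    ```python
    # Integrate-and-fire-style decoding with integer counters and a fixed threshold.
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_outputs, T = 32, 2, 100
    spikes = (rng.random((T, n_channels)) < 0.1).astype(int)   # binary spike raster
    wiring = rng.integers(0, 2, size=(n_channels, n_outputs))  # 0/1 connectivity only

    counters = np.zeros(n_outputs, dtype=int)
    threshold, decoded = 15, []
    for t in range(T):
        counters += spikes[t] @ wiring        # with 0/1 wiring this is per-output spike counting
        fired = counters >= threshold
        decoded.append(fired.copy())
        counters[fired] = 0                   # reset after firing, like integrate-and-fire
    print("output spike counts:", np.sum(decoded, axis=0))
    ```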

  10. Modular Neural Tile Architecture for Compact Embedded Hardware Spiking Neural Network

    NARCIS (Netherlands)

    Pande, Sandeep; Morgan, Fearghal; Cawley, Seamus; Bruintjes, Tom; Smit, Gerardus Johannes Maria; McGinley, Brian; Carrillo, Snaider; Harkin, Jim; McDaid, Liam

    2013-01-01

    Biologically-inspired packet switched network on chip (NoC) based hardware spiking neural network (SNN) architectures have been proposed as an embedded computing platform for classification, estimation and control applications. Storage of large synaptic connectivity (SNN topology) information in

  11. Photosensitive-polyimide based method for fabricating various neural electrode architectures

    Directory of Open Access Journals (Sweden)

    Yasuhiro X Kato

    2012-06-01

    Full Text Available An extensive photosensitive polyimide (PSPI-based method for designing and fabricating various neural electrode architectures was developed. The method aims to broaden the design flexibility and expand the fabrication capability for neural electrodes to improve the quality of recorded signals and integrate other functions. After characterizing PSPI’s properties for micromachining processes, we successfully designed and fabricated various neural electrodes even on a non-flat substrate using only one PSPI as an insulation material and without the time-consuming dry etching processes. The fabricated neural electrodes were an electrocorticogram electrode, a mesh intracortical electrode with a unique lattice-like mesh structure to fixate neural tissue, and a guide cannula electrode with recording microelectrodes placed on the curved surface of a guide cannula as a microdialysis probe. In vivo neural recordings using anesthetized rats demonstrated that these electrodes can be used to record neural activities repeatedly without any breakage and mechanical failures, which potentially promises stable recordings for long periods of time. These successes make us believe that this PSPI-based fabrication is a powerful method, permitting flexible design and easy optimization of electrode architectures for a variety of electrophysiological experimental research with improved neural recording performance.

  12. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A comparison of neural network architectures for the prediction of MRR in EDM

    Science.gov (United States)

    Jena, A. R.; Das, Raja

    2017-11-01

    The aim of the research work is to predict the material removal rate of a work-piece in electrical discharge machining (EDM). Here, an effort has been made to predict the material removal rate through a back-propagation neural network (BPN) and a radial basis function neural network (RBFN) for a work-piece of AISI D2 steel. The input parameters for the architecture are discharge current (Ip), pulse duration (Ton), and duty cycle (τ), taken into consideration to obtain the material removal rate of the work-piece as the output. It has been observed that the radial basis function neural network is comparatively faster than the back-propagation neural network, but the back-propagation neural network yields more realistic values. Therefore BPN may be considered the better choice in this setting for consistent prediction, saving the time and money of conducting experiments.
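
    For concreteness, the two model families being compared can be sketched as follows (synthetic stand-in data, not the AISI D2 experiments; the scikit-learn MLP plays the role of the BPN, and a k-means-centres-plus-least-squares construction plays the role of the RBFN):

    ```python
    # BPN vs. RBFN sketch on a synthetic three-input regression problem.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(150, 3))            # normalised Ip, Ton, duty cycle (hypothetical)
    y = 2 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.3 * X[:, 2] + 0.05 * rng.normal(size=150)

    # RBF network: Gaussian hidden layer around k-means centres, linear readout by least squares
    centres = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
    def rbf_features(data, width=0.3):
        d2 = ((data[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(rbf_features(X), y, rcond=None)
    rbf_pred = rbf_features(X) @ w

    # Back-propagation network: a small MLP trained by gradient descent
    bpn = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0).fit(X, y)
    mse = lambda p: float(np.mean((p - y) ** 2))
    print("RBFN train MSE:", mse(rbf_pred), " BPN train MSE:", mse(bpn.predict(X)))
    ```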

  14. Optimum Neural Network Architecture for Precipitation Prediction of Myanmar

    OpenAIRE

    Khaing Win Mar; Thinn Thu Naing

    2008-01-01

    Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction since it is not immediately obvious how many input or hidden nodes are used in the model. In this paper, neural network model is used as a forecasting tool. The major aim is to evaluate a s...

  15. SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS

    Directory of Open Access Journals (Sweden)

    Marijana Zekić-Sušac

    2012-07-01

    Full Text Available After production and operations, finance and investments are one of the most frequent areas of neural network applications in business. The lack of standardized paradigms that can determine the efficiency of certain NN architectures in a particular problem domain is still present. The selection of NN architecture needs to take into consideration the type of the problem, the nature of the data in the model, as well as some strategies based on result comparison. The paper describes previous research in that area and suggests a forward strategy for selecting the best NN algorithm and structure. Since the strategy includes both parameter-based and variable-based testings, it can be used for selecting NN architectures as well as for extracting models. The backpropagation, radial basis, modular, LVQ and probabilistic neural network algorithms were used on two independent sets: stock market and credit scoring data. The results show that neural networks give better accuracy compared to multiple regression and logistic regression models. Since it is model-independent, the strategy can be used by researchers and professionals in other areas of application.

  16. Neurally and mathematically motivated architecture for language and thought.

    Science.gov (United States)

    Perlovsky, L I; Ilin, R

    2010-01-01

    Neural structures of interaction between thinking and language are unknown. This paper suggests a possible architecture motivated by neural and mathematical considerations. A mathematical requirement of computability imposes significant constraints on possible architectures consistent with brain neural structure and with a wealth of psychological knowledge. How does language interact with cognition? Do we think with words, or is thinking independent of language, with words being just labels for decisions? Why is language learned by the age of 5 or 7, while acquiring the knowledge needed to use this language takes a lifetime? This paper discusses hierarchical aspects of language and thought and argues that high-level abstract thinking is impossible without language. We discuss a mathematical technique that can model the joint language-thought architecture, while overcoming previously encountered difficulties of computability. This architecture explains a contradiction between the human ability for rational thoughtful decisions and the irrationality of human thinking revealed by Tversky and Kahneman; a crucial role in this contradiction might be played by language. The proposed model resolves long-standing issues: how the brain learns correct word-object associations; why animals do not talk and think like people. We propose a role for language emotionality in its interaction with thought. We relate the mathematical model to Humboldt's "firmness" of languages, and discuss the possible influence of language grammar on its emotionality. Psychological and brain imaging experiments related to the proposed model are discussed. Future theoretical and experimental research is outlined.

  17. Marginally Stable Triangular Recurrent Neural Network Architecture for Time Series Prediction.

    Science.gov (United States)

    Sivakumar, Seshadri; Sivakumar, Shyamala

    2017-09-25

    This paper introduces a discrete-time recurrent neural network architecture using triangular feedback weight matrices that allows a simplified approach to ensuring network and training stability. The triangular structure of the weight matrices is exploited to readily ensure that the eigenvalues of the feedback weight matrix represented by the block diagonal elements lie on the unit circle in the complex z-plane by updating these weights based on the differential of the angular error variable. Such placement of the eigenvalues together with the extended close interaction between state variables facilitated by the nondiagonal triangular elements, enhances the learning ability of the proposed architecture. Simulation results show that the proposed architecture is highly effective in time-series prediction tasks associated with nonlinear and chaotic dynamic systems with underlying oscillatory modes. This modular architecture with dual upper and lower triangular feedback weight matrices mimics fully recurrent network architectures, while maintaining learning stability with a simplified training process. While training, the block-diagonal weights (hence the eigenvalues) of the dual triangular matrices are constrained to the same values during weight updates aimed at minimizing the possibility of overfitting. The dual triangular architecture also exploits the benefit of parsing the input and selectively applying the parsed inputs to the two subnetworks to facilitate enhanced learning performance.
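
    The eigenvalue-placement idea can be sketched as follows (our reading of the abstract, not the authors' code): with 2x2 rotation blocks on the block diagonal and couplings only above it, every eigenvalue of the feedback matrix has unit modulus regardless of the off-diagonal entries:

    ```python
    # Block upper-triangular feedback matrix whose eigenvalues all lie on the unit circle.
    import numpy as np

    rng = np.random.default_rng(0)
    n_blocks = 4
    W = np.zeros((2 * n_blocks, 2 * n_blocks))
    for k, theta in enumerate(rng.uniform(0, np.pi, n_blocks)):
        c, s = np.cos(theta), np.sin(theta)
        W[2*k:2*k+2, 2*k:2*k+2] = [[c, -s], [s, c]]       # rotation block: unit-modulus eigenvalues

    # strictly upper couplings (offset >= 2) leave the block-diagonal, hence the spectrum, untouched
    W += np.triu(rng.normal(scale=0.3, size=W.shape), k=2)
    print("eigenvalue moduli:", np.round(np.abs(np.linalg.eigvals(W)), 6))
    ```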

  18. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
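
    The ODE view can be made concrete with a minimal forward-Euler residual propagation, y_{k+1} = y_k + h * tanh(K y_k + b); using an antisymmetric K (one of the stabilisation ideas associated with this line of work; dimensions, depth and step size here are illustrative assumptions) keeps the state norm from exploding or collapsing across many layers:

    ```python
    # Forward propagation as the forward-Euler discretisation of an ODE.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, depth, h = 4, 200, 0.05
    A = rng.normal(size=(dim, dim))
    K = A - A.T                                  # antisymmetric: purely imaginary eigenvalues
    b = rng.normal(scale=0.1, size=dim)

    y = rng.normal(size=dim)
    norms = []
    for _ in range(depth):                       # 200 "layers" of residual propagation
        y = y + h * np.tanh(K @ y + b)
        norms.append(np.linalg.norm(y))
    print("state norm at layers 1, 100, 200:", norms[0], norms[99], norms[-1])
    ```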

  19. Combinatorial structures and processing in neural blackboard architectures

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, Frank; de Kamps, Marc; Besold, Tarek R.; d'Avila Garcez, Artur; Marcus, Gary F.; Miikkulainen, Risto

    2015-01-01

    We discuss and illustrate Neural Blackboard Architectures (NBAs) as the basis for variable binding and combinatorial processing in the brain. We focus on the NBA for sentence structure. NBAs are based on the notion that conceptual representations are in situ, hence cannot be copied or transported.

  20. Convolutional neural network architectures for predicting DNA–protein binding

    Science.gov (United States)

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
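
    A toy sketch of the convolution-plus-pooling core of such models (not the authors' code; the motif and sequences are made up): a single convolutional kernel acts as a motif scanner over one-hot DNA, and global max pooling reports the best match independently of position. Real models stack many kernels and layers:

    ```python
    # One convolutional "motif detector" over one-hot DNA with global max pooling.
    import numpy as np

    def one_hot(seq):
        x = np.zeros((4, len(seq)))
        for i, base in enumerate(seq):
            x["ACGT".index(base), i] = 1.0
        return x

    kernel = -np.ones((4, 6))                            # length-6 kernel, mismatch penalised
    for i, base in enumerate("GATAAG"):                  # hand-set to prefer a GATA-like motif
        kernel["ACGT".index(base), i] = 1.0

    def conv_max(seq):
        x = one_hot(seq)
        scores = [np.sum(x[:, i:i+6] * kernel) for i in range(len(seq) - 5)]
        return max(scores)                               # global max pooling over positions

    print(conv_max("TTTTGATAAGTTTT"))                    # contains the motif -> high score
    print(conv_max("TTTTCCCCCCTTTT"))                    # no motif -> low score
    ```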

  1. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and the corresponding interconnection weights. This problem has been widely studied in many research works, but the existing solutions usually involve excessive computational cost for most problems and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
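
    A minimal generic ELM sketch (the basic algorithm the paper builds on, not its architecture-design procedure; problem and sizes are arbitrary): the hidden layer is random and never trained, and only the output weights are fitted in closed form:

    ```python
    # Extreme Learning Machine: random hidden layer, least-squares output layer.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)      # circular decision boundary

    n_hidden = 50
    W = rng.normal(size=(2, n_hidden))                          # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                                      # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)                # output weights in closed form

    pred = (H @ beta > 0.5).astype(float)
    print("training accuracy:", float(np.mean(pred == y)))
    ```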

  2. Dynamic Neural Fields as a Step Towards Cognitive Neuromorphic Architectures

    Directory of Open Access Journals (Sweden)

    Yulia eSandamirskaya

    2014-01-01

    Full Text Available Dynamic Field Theory (DFT) is an established framework for modelling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as analogue Very Large Scale Integration (VLSI) devices. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. In addition, I also identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning.

  3. Learning sequential control in a Neural Blackboard Architecture for in situ concept reasoning

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, Frank; Besold, Tarek R.; Lamb, Luis; Serafini, Luciano; Tabor, Whitney

    2016-01-01

    Simulations are presented and discussed of learning sequential control in a Neural Blackboard Architecture (NBA) for in situ concept-based reasoning. Sequential control is learned in a reservoir network, consisting of columns with neural circuits. This allows the reservoir to control the dynamics of

  4. Learning, memory, and the role of neural network architecture.

    Directory of Open Access Journals (Sweden)

    Ann M Hermundstad

    2011-06-01

    Full Text Available The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.

  5. Dynamics of a neural system with a multiscale architecture

    Science.gov (United States)

    Breakspear, Michael; Stam, Cornelis J

    2005-01-01

    The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448

  6. An efficient optical architecture for sparsely connected neural networks

    Science.gov (United States)

    Hine, Butler P., III; Downie, John D.; Reid, Max B.

    1990-01-01

    An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically, thereby making use of the space-bandwidth products of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on such updatable media as thermoplastic materials, in order to define the interconnections and weights of a neural network processor; attention is given to limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin film holographic device.

  7. Convolutional neural networks for event-related potential detection: impact of the architecture.

    Science.gov (United States)

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most of the signal and classification techniques for the detection of brain responses are based on linear algebra, different pattern recognition techniques such as the convolutional neural network (CNN), as a type of deep learning technique, have shown some interest as they are able to process the signal after limited pre-processing. In this study, we propose to investigate the performance of CNNs in relation to their architecture and in relation to how they are evaluated: a single system for each subject, or a system for all the subjects. More particularly, we want to address the change in performance that can be observed between tailoring a neural network to a subject and considering a neural network for a group of subjects, taking advantage of a larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can lead to an AUC above 0.9 by using an appropriate architecture using spatial filtering and shift invariant layers.

  8. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    Energy Technology Data Exchange (ETDEWEB)

    Wijayasekara, Dumidu, E-mail: wija2589@vandals.uidaho.edu [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Manic, Milos [Department of Computer Science, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83402 (United States); Sabharwall, Piyush [Idaho National Laboratory, Idaho Falls, ID (United States); Utgikar, Vivek [Department of Chemical Engineering, University of Idaho, Idaho Falls, ID 83402 (United States)

    2011-07-15

    Highlights: > Performance prediction of PCHE using artificial neural networks. > Evaluating artificial neural network performance for PCHE modeling. > Selection of over-training resilient artificial neural networks. > Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE), and standard deviation of MSE. The method uses k-fold cross validation. Therefore in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to their generalization capability and capability to conform to original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the testing
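
    The selection criterion described above can be sketched generically (synthetic data and plain scikit-learn MLPs, not the EBaLM-OTR implementation): candidate architectures are ranked by the mean and standard deviation of their validation MSE across k folds rather than by training error alone:

    ```python
    # k-fold cross-validation over candidate hidden-layer configurations.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(120, 4))
    y = X[:, 0] + np.sin(3 * X[:, 1]) - X[:, 2] * X[:, 3] + 0.05 * rng.normal(size=120)

    for hidden in [(4,), (16,), (64,), (16, 16)]:               # candidate architectures
        fold_mse = []
        for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000,
                                 random_state=0).fit(X[tr], y[tr])
            fold_mse.append(np.mean((model.predict(X[va]) - y[va]) ** 2))
        print(hidden, "mean MSE %.4f  std %.4f" % (np.mean(fold_mse), np.std(fold_mse)))
    ```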

  9. Optimal artificial neural network architecture selection for performance prediction of compact heat exchanger with the EBaLM-OTR technique

    International Nuclear Information System (INIS)

    Wijayasekara, Dumidu; Manic, Milos; Sabharwall, Piyush; Utgikar, Vivek

    2011-01-01

    Highlights: → Performance prediction of PCHE using artificial neural networks. → Evaluating artificial neural network performance for PCHE modeling. → Selection of over-training resilient artificial neural networks. → Artificial neural network architecture selection for modeling problems with small data sets. - Abstract: Artificial Neural Networks (ANN) have been used in the past to predict the performance of printed circuit heat exchangers (PCHE) with satisfactory accuracy. Typically published literature has focused on optimizing ANN using a training dataset to train the network and a testing dataset to evaluate it. Although this may produce outputs that agree with experimental results, there is a risk of over-training or over-learning the network rather than generalizing it, which should be the ultimate goal. An over-trained network is able to produce good results with the training dataset but fails when new datasets with subtle changes are introduced. In this paper we present EBaLM-OTR (error back propagation and Levenberg-Marquardt algorithms for over training resilience) technique, which is based on a previously discussed method of selecting neural network architecture that uses a separate validation set to evaluate different network architectures based on mean square error (MSE), and standard deviation of MSE. The method uses k-fold cross validation. Therefore in order to select the optimal architecture for the problem, the dataset is divided into three parts which are used to train, validate and test each network architecture. Then each architecture is evaluated according to their generalization capability and capability to conform to original data. The method proved to be a comprehensive tool in identifying the weaknesses and advantages of different network architectures. The method also highlighted the fact that the architecture with the lowest training error is not always the most generalized and therefore not the optimal. Using the method the

  10. A framework for plasticity implementation on the SpiNNaker neural architecture.

    Science.gov (United States)

    Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B

    2014-01-01

    Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are badly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
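
    As one example of the rules such a framework hosts, a pair-based STDP update can be sketched with exponentially decaying spike traces (a generic textbook rule with made-up parameters, not SpiNNaker code):

    ```python
    # Pair-based STDP: potentiation for pre-before-post pairs, depression otherwise.
    import numpy as np

    rng = np.random.default_rng(0)
    T, dt = 1000, 1.0                       # simulation steps of 1 ms
    pre = rng.random(T) < 0.02              # pre-synaptic spike train
    post = rng.random(T) < 0.02             # post-synaptic spike train

    w, a_plus, a_minus, tau = 0.5, 0.01, 0.012, 20.0
    x_pre = x_post = 0.0                    # exponentially decaying spike traces
    for t in range(T):
        x_pre += -x_pre * dt / tau + (1.0 if pre[t] else 0.0)
        x_post += -x_post * dt / tau + (1.0 if post[t] else 0.0)
        if post[t]:
            w += a_plus * x_pre             # pre fired recently -> potentiate
        if pre[t]:
            w -= a_minus * x_post           # post fired recently -> depress
        w = min(max(w, 0.0), 1.0)           # hard weight bounds
    print("final weight:", w)
    ```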

  11. On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

    Science.gov (United States)

    Bianchini, Monica; Scarselli, Franco

    2014-08-01

    Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts their ability to implement high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the activation function used. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.

  12. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Developmental and Architectural Principles of the Lateral-line Neural Map

    Directory of Open Access Journals (Sweden)

    Hernan eLopez-Schier

    2013-03-01

    Full Text Available The transmission and central representation of sensory cues through the accurate construction of neural maps is essential for animals to react to environmental stimuli. Structural diversity of sensorineural maps along a continuum between discrete- and continuous-map architectures can influence behavior. The mechanosensory lateral line of fishes and amphibians, for example, detects complex hydrodynamics occurring around the animal body. It triggers innate fast escape reactions but also modulates complex navigation behaviors that require constant knowledge about the environment. The aim of this article is to summarize recent work in the zebrafish that has shed light on the development and structure of the lateralis neural map, which is helping to understand how individual sensory modalities generate appropriate behavioral responses to the sensory context.

  14. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.

    Science.gov (United States)

    Graves, Alex; Schmidhuber, Jürgen

    2005-01-01

    In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
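
    A framewise-classification sketch with a bidirectional LSTM is shown below (a PyTorch stand-in with random data in place of TIMIT; the feature and label counts are assumptions): every frame receives a phoneme label, and concatenating the forward and backward passes gives each frame both left and right context:

    ```python
    # Bidirectional LSTM with a framewise linear readout and cross-entropy loss.
    import torch
    import torch.nn as nn

    n_frames, n_features, n_phonemes = 100, 26, 61
    x = torch.randn(1, n_frames, n_features)                 # one utterance, batch_first layout
    targets = torch.randint(0, n_phonemes, (1, n_frames))    # one label per frame

    lstm = nn.LSTM(n_features, 64, bidirectional=True, batch_first=True)
    readout = nn.Linear(2 * 64, n_phonemes)                  # concatenated directions per frame

    hidden, _ = lstm(x)                                      # shape (1, n_frames, 128)
    logits = readout(hidden)                                 # framewise phoneme scores
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, n_phonemes), targets.reshape(-1))
    loss.backward()                                          # one full-gradient training step
    print("frames:", logits.shape[1], " loss:", float(loss))
    ```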

  15. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.

    Science.gov (United States)

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It makes it possible to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.
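
    The kind of dynamics a DFT architecture is built from can be sketched with a one-dimensional dynamic neural field (an Amari-style equation with illustrative parameters, not cedar's defaults): with local excitation and global inhibition, a localized input drives the field into a self-stabilized peak, the attractor state from which function emerges:

    ```python
    # One-dimensional dynamic neural field relaxing into a self-stabilized peak.
    import numpy as np

    n, dt, tau, h = 101, 1.0, 10.0, -5.0
    x = np.linspace(-10, 10, n)
    u = np.full(n, h)                                          # field activation at resting level h
    kernel = 4.0 * np.exp(-0.5 * (x / 1.5) ** 2) - 1.0         # local excitation, global inhibition
    stimulus = 6.0 * np.exp(-0.5 * ((x - 2.0) / 1.0) ** 2)     # localized input centred at x = 2

    for _ in range(300):
        f = 1.0 / (1.0 + np.exp(-u))                           # sigmoid output of the field
        interaction = np.convolve(f, kernel, mode="same") * (x[1] - x[0])
        u += dt / tau * (-u + h + stimulus + interaction)      # Amari field dynamics
    print("peak position:", x[int(np.argmax(u))], " peak activation:", float(u.max()))
    ```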

  16. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    OpenAIRE

    S Safinaz; A V Ravi Kumar

    2017-01-01

    In recent years, video super resolution techniques have become a mandatory requirement to obtain high resolution videos. Many super resolution techniques have been researched, but video super resolution or scaling remains a vital challenge. In this paper, we have presented a real-time video scaling method based on a convolution neural network architecture to eliminate the blurriness in images and video frames and to provide better reconstruction quality while scaling large datasets from lower resolution frames t...

  17. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    Science.gov (United States)

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks on public channels. How to design neural cryptography remains a great challenge. In this paper, in order to provide an approach to this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework is named the tree state classification machine (TSCM), which extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we carefully study the framework and find that the heuristic rule can improve the security of TSCM-based neural cryptography. Therefore, TSCM and the heuristic rule can guide us in designing a great deal of effective neural cryptography candidates, among which it is possible to achieve more secure instances. Significantly, in the light of TSCM and the heuristic rule, we further show that our designed neural cryptography outperforms TPM (the most secure model at present) in security. Finally, a series of numerical simulation experiments are provided to verify the validity and applicability of our results.
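
    The classical special case of this family, the tree parity machine, can be sketched as follows (a generic TPM with small, arbitrary K, N and L, not the TSCM of the paper): the two parties update only on rounds where their parity outputs agree, and their weights gradually synchronise into a shared key:

    ```python
    # Tree parity machine synchronisation via mutual Hebbian learning on public inputs.
    import numpy as np

    K, N, L = 3, 10, 3
    rng = np.random.default_rng(0)
    wA = rng.integers(-L, L + 1, size=(K, N))        # party A's bounded integer weights
    wB = rng.integers(-L, L + 1, size=(K, N))        # party B's bounded integer weights

    def output(w, x):
        sigma = np.sign(np.sum(w * x, axis=1))
        sigma[sigma == 0] = -1
        return sigma, int(np.prod(sigma))            # hidden signs and parity output

    for step in range(5000):
        x = rng.choice([-1, 1], size=(K, N))         # public random input
        sA, tA = output(wA, x)
        sB, tB = output(wB, x)
        if tA == tB:                                  # Hebbian update only on agreeing rounds
            for w, s, t in ((wA, sA, tA), (wB, sB, tB)):
                mask = (s == t)[:, None]              # update only hidden units matching the output
                w += mask * x * t
                np.clip(w, -L, L, out=w)
        if np.array_equal(wA, wB):
            print("synchronised after", step + 1, "exchanges")
            break
    else:
        print("not yet synchronised after 5000 exchanges")
    ```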

  18. Deep Neural Architectures for Mapping Scalp to Intracranial EEG.

    Science.gov (United States)

    Antoniades, Andreas; Spyrou, Loukianos; Martin-Lopez, David; Valentin, Antonio; Alarcon, Gonzalo; Sanei, Saeid; Took, Clive Cheong

    2018-03-19

    Data is often plagued by noise which encumbers machine learning of clinically useful biomarkers, and electroencephalogram (EEG) data is no exception. Intracranial EEG (iEEG) data enhances the training of deep learning models of the human brain, yet is often prohibitive due to the invasive recording process. A more convenient alternative is to record brain activity using scalp electrodes. However, the inherent noise associated with scalp EEG data often impedes the learning process of neural models, leading to substandard performance. Here, an ensemble deep learning architecture for nonlinearly mapping scalp to iEEG data is proposed. The proposed architecture exploits the information from a limited number of joint scalp-intracranial recordings to establish a novel methodology for detecting epileptic discharges from the sEEG of a general population of subjects. Statistical tests and qualitative analysis have revealed that the generated pseudo-intracranial data are highly correlated with the true intracranial data. This facilitated the detection of IEDs from the scalp recordings, where such waveforms are often not visible. As a real-world clinical application, these pseudo-iEEGs are then used by a convolutional neural network for the automated classification of intracranial epileptic discharges (IEDs) and non-IED trials in the context of epilepsy analysis. Although the aim of this work was to circumvent the unavailability of iEEG and the limitations of sEEG, we have achieved a classification accuracy of 68%, an increase of 6% over the previously proposed linear regression mapping.

  19. Developing dynamic field theory architectures for embodied cognitive systems with cedar

    Directory of Open Access Journals (Sweden)

    Oliver Lomp

    2016-11-01

    Full Text Available Embodied artificial cognitive systems such as autonomous robots or intelligent observers connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It makes it possible to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.

  20. The Functional Architecture of the Brain Underlies Strategic Deception in Impression Management.

    Science.gov (United States)

    Luo, Qiang; Ma, Yina; Bhatt, Meghana A; Montague, P Read; Feng, Jianfeng

    2017-01-01

    Impression management, as one of the most essential skills of social function, impacts one's survival and success in human societies. However, the neural architecture underpinning this social skill remains poorly understood. By employing a two-person bargaining game, we exposed three strategies involving distinct cognitive processes for social impression management with different levels of strategic deception. We utilized a novel adaptation of Granger causality accounting for signal-dependent noise (SDN), which captured the directional connectivity underlying the impression management during the bargaining game. We found that the sophisticated strategists engaged stronger directional connectivity from both dorsal anterior cingulate cortex and retrosplenial cortex to rostral prefrontal cortex, and the strengths of these directional influences were associated with higher level of deception during the game. Using the directional connectivity as a neural signature, we identified the strategic deception with 80% accuracy by a machine-learning classifier. These results suggest that different social strategies are supported by distinct patterns of directional connectivity among key brain regions for social cognition.

  1. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    S Safinaz

    2017-08-01

    Full Text Available In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super resolution, or scaling, remains a vital challenge. In this paper, we present a real-time video scaling method based on a convolution neural network architecture that eliminates blurriness in images and video frames and provides better reconstruction quality when scaling large datasets from lower-resolution frames to high-resolution frames. We compare our outcomes with multiple existing algorithms. Extensive results for the proposed technique, RemCNN (Reconstruction error minimization Convolution Neural Network), show that our model outperforms existing techniques such as bicubic, bilinear, and MCResNet and provides better reconstructed moving images and video frames. The experimental results show that our average PSNR is 47.80474 for upscale-2, 41.70209 for upscale-3, and 36.24503 for upscale-4 on the Myanmar dataset, which is very high in contrast to other existing techniques. These results prove the high efficiency and better performance of our proposed real-time video scaling model based on a convolution neural network architecture.
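
    For readers unfamiliar with CNN-based super resolution, the PyTorch sketch below shows the general shape of such a model: a small stack of convolutions trained to minimize the reconstruction error between an upscaled low-resolution frame and its high-resolution target, evaluated with PSNR. The layer widths, kernel sizes, and data are illustrative assumptions and do not reproduce the RemCNN configuration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SimpleSRCNN(nn.Module):
            """Three-layer CNN mapping a bicubically upscaled frame to a sharper one.
            Layer widths and kernels are illustrative, not the RemCNN settings."""
            def __init__(self):
                super().__init__()
                self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)   # patch extraction
                self.conv2 = nn.Conv2d(64, 32, kernel_size=1)              # non-linear mapping
                self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)    # reconstruction
            def forward(self, x):
                x = F.relu(self.conv1(x))
                x = F.relu(self.conv2(x))
                return self.conv3(x)

        def psnr(pred, target, max_val=1.0):
            # Peak signal-to-noise ratio, the quality metric quoted in the abstract
            return 10.0 * torch.log10(max_val ** 2 / F.mse_loss(pred, target))

        model = SimpleSRCNN()
        lowres_upscaled = torch.rand(1, 1, 96, 96)   # e.g. bicubic upscale of a low-res frame
        highres_target = torch.rand(1, 1, 96, 96)
        loss = F.mse_loss(model(lowres_upscaled), highres_target)  # reconstruction-error objective
        loss.backward()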

  2. Architecture and biological applications of artificial neural networks: a tuberculosis perspective.

    Science.gov (United States)

    Darsey, Jerry A; Griffin, William O; Joginipelli, Sravanthi; Melapu, Venkata Kiran

    2015-01-01

    Advancement of science and technology has prompted researchers to develop new intelligent systems that can solve a variety of problems such as pattern recognition, prediction, and optimization. The ability of the human brain to learn in a fashion that tolerates noise and error has attracted many researchers and provided the starting point for the development of artificial neural networks: the intelligent systems. Intelligent systems can acclimatize to the environment or data and can maximize the chances of success or improve the efficiency of a search. Due to massive parallelism with large numbers of interconnected processors and their ability to learn from the data, neural networks can solve a variety of challenging computational problems. Neural networks have the ability to derive meaning from complicated and imprecise data; they are used to detect patterns and trends that are too complex for humans or other computer systems. Solutions to the toughest problems will not be found through one narrow specialization; therefore, we need to combine interdisciplinary approaches to discover the solutions to a variety of problems. Many researchers in different disciplines such as medicine, bioinformatics, molecular biology, and pharmacology have successfully applied artificial neural networks. This chapter helps the reader understand the basics of artificial neural networks, their applications, and methodology; it also outlines the network learning process and architecture. We present a brief outline of the application of neural networks to medical diagnosis, drug discovery, gene identification, and protein structure prediction. We conclude with a summary of the results from our study on tuberculosis data using neural networks, in diagnosing active tuberculosis and predicting chronic vs. infiltrative forms of tuberculosis.

  3. Evolution of genetic architecture under directional selection.

    Science.gov (United States)

    Hansen, Thomas F; Alvarez-Castro, José M; Carter, Ashley J R; Hermisson, Joachim; Wagner, Günter P

    2006-08-01

    We investigate the multilinear epistatic model under mutation-limited directional selection. We confirm previous results that only directional epistasis, in which genes on average reinforce or diminish each other's effects, contributes to the initial evolution of mutational effects. Thus, either canalization or decanalization can occur under directional selection, depending on whether positive or negative epistasis is prevalent. We then focus on the evolution of the epistatic coefficients themselves. In the absence of higher-order epistasis, positive pairwise epistasis will tend to weaken relative to additive effects, while negative pairwise epistasis will tend to become strengthened. Positive third-order epistasis will counteract these effects, while negative third-order epistasis will reinforce them. More generally, gene interactions of all orders have an inherent tendency for negative changes under directional selection, which can only be modified by higher-order directional epistasis. We identify three types of nonadditive quasi-equilibrium architectures that, although not strictly stable, can be maintained for an extended time: (1) nondirectional epistatic architectures; (2) canalized architectures with strong epistasis; and (3) near-additive architectures in which additive effects keep increasing relative to epistasis.

  4. Neural control of magnetic suspension systems

    Science.gov (United States)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden layer feedforward networks trained via back propagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.

  5. Comparison of Classifier Architectures for Online Neural Spike Sorting.

    Science.gov (United States)

    Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood

    2017-04-01

    High-density, intracranial recordings from micro-electrode arrays need to undergo Spike Sorting in order to associate the recorded neuronal spikes to particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike-sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and Cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirement. We establish that the neural-network-based Self-Organizing Maps classifier offers the most viable solution. A spike sorter based on the Self-Organizing Maps classifier requires only 7.83% of the computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering a 3% better accuracy at 7 dB SNR.
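
    To make the off-chip training / on-chip classification split concrete, the NumPy sketch below implements the simplest of the candidate classifiers, a cosine-distance (nearest-template) classifier: cluster templates are computed off-chip from pre-sorted spikes, and on-chip classification reduces to a few dot products per spike. The feature vectors and labels are synthetic placeholders.

        import numpy as np

        # Off-chip training: compute one template (centroid) per neuron from
        # already-sorted training spikes. Features are placeholders (e.g. PCA scores).
        rng = np.random.default_rng(0)
        train_feats = rng.normal(size=(300, 3))
        train_labels = rng.integers(0, 3, size=300)
        templates = np.stack([train_feats[train_labels == k].mean(axis=0) for k in range(3)])

        # On-chip classification: assign each new spike to the template with the
        # smallest cosine distance (largest cosine similarity).
        def classify(spike_feat, templates):
            sims = templates @ spike_feat / (
                np.linalg.norm(templates, axis=1) * np.linalg.norm(spike_feat) + 1e-12)
            return int(np.argmax(sims))

        new_spike = rng.normal(size=3)
        print("assigned neuron:", classify(new_spike, templates))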

  6. The Functional Architecture of the Brain Underlies Strategic Deception in Impression Management

    Directory of Open Access Journals (Sweden)

    Qiang Luo

    2017-11-01

    Full Text Available Impression management, as one of the most essential skills of social function, impacts one's survival and success in human societies. However, the neural architecture underpinning this social skill remains poorly understood. By employing a two-person bargaining game, we exposed three strategies involving distinct cognitive processes for social impression management with different levels of strategic deception. We utilized a novel adaptation of Granger causality accounting for signal-dependent noise (SDN), which captured the directional connectivity underlying impression management during the bargaining game. We found that the sophisticated strategists engaged stronger directional connectivity from both the dorsal anterior cingulate cortex and the retrosplenial cortex to the rostral prefrontal cortex, and the strengths of these directional influences were associated with higher levels of deception during the game. Using the directional connectivity as a neural signature, a machine-learning classifier identified strategic deception with 80% accuracy. These results suggest that different social strategies are supported by distinct patterns of directional connectivity among key brain regions for social cognition.

  7. Seafloor classification using echo- waveforms: A method employing hybrid neural network architecture

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Mahale, V.; DeSouza, C.; Das, P.

    Index terms: neural network architecture, seafloor classification, self-organizing feature map (SOFM). Seafloor classification and characterization using remote high-frequency acoustic systems has been recognized as a useful tool (see [1...] and references therein). The seafloor's characteristics are extremely complicated due to variations of the many parameters at different scales. The parameters include sediment grain size, relief height at the water–sediment interface, and variations within...

  8. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan Heangping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost
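
    As a reference for readers, the sketch below shows the skeleton of a simulated annealing search over four discrete architecture parameters with a slow, Boltzmann-like cooling schedule. The cost function is a synthetic stand-in for 1 - Az (in the study, every evaluation meant training and testing a CNN on mammography data), and the parameter ranges and schedule constants are illustrative assumptions.

        import random, math

        random.seed(1)
        # Allowed values for the four architecture parameters:
        # (#node groups layer 1, kernel size 1, #node groups layer 2, kernel size 2)
        choices = [(2, 4, 6, 8), (3, 5, 7, 9), (2, 4, 6, 8), (3, 5, 7, 9)]

        def cost(arch):
            # Stand-in for 1 - Az obtained by training/testing a CNN with this
            # architecture; here just a smooth toy surface with a known optimum.
            target = (4, 5, 2, 7)
            return sum(abs(a - t) for a, t in zip(arch, target)) / 40.0

        def neighbor(arch):
            # Move one randomly chosen parameter to an adjacent allowed value.
            arch = list(arch)
            i = random.randrange(4)
            j = choices[i].index(arch[i]) + random.choice((-1, 1))
            arch[i] = choices[i][max(0, min(j, len(choices[i]) - 1))]
            return tuple(arch)

        current = tuple(c[0] for c in choices)
        best = current
        for step in range(1, 500):
            T = 1.0 / math.log(step + 1)          # slow, Boltzmann-like cooling
            cand = neighbor(current)
            dE = cost(cand) - cost(current)
            if dE < 0 or random.random() < math.exp(-dE / T):
                current = cand
                if cost(current) < cost(best):
                    best = current
        print("best architecture found:", best, "cost:", cost(best))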

  9. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
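
    The two unsupervised building blocks compared here differ mainly in their learning rule. The NumPy sketch below contrasts a single contrastive-divergence (CD-1) update for a Bernoulli RBM with a single backpropagation step for a tied-weight autoencoder; the layer sizes, the binary placeholder data, and the omission of bias terms are simplifying assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        v_dim, h_dim, lr = 20, 10, 0.05
        data = (rng.random((64, v_dim)) > 0.5).astype(float)   # placeholder binary inputs

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # --- Restricted Boltzmann Machine: one contrastive-divergence (CD-1) step ---
        W = rng.normal(0, 0.1, (v_dim, h_dim))
        ph = sigmoid(data @ W)                        # hidden probabilities given data
        h = (rng.random(ph.shape) < ph).astype(float) # sampled hidden states
        pv = sigmoid(h @ W.T)                         # reconstructed visibles
        ph_recon = sigmoid(pv @ W)
        W += lr * (data.T @ ph - pv.T @ ph_recon) / len(data)   # positive minus negative phase

        # --- Autoencoder with tied weights: one error-backpropagation step ---
        We = rng.normal(0, 0.1, (v_dim, h_dim))
        hidden = sigmoid(data @ We)
        recon = sigmoid(hidden @ We.T)
        d_recon = (recon - data) * recon * (1 - recon)            # output-layer delta
        d_hidden = (d_recon @ We) * hidden * (1 - hidden)         # hidden-layer delta
        grad = data.T @ d_hidden + d_recon.T @ hidden             # tied-weight gradient
        We -= lr * grad / len(data)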

  10. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    Science.gov (United States)

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Arrays (FPGAs) devices to provide high performance execution and flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution intends to provide a tool that allows prototyping SNNs faster than on CPU/GPU architectures and significantly cheaper than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Deciphering the Cognitive and Neural Mechanisms Underlying ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Deciphering the Cognitive and Neural Mechanisms Underlying Auditory Learning. This project seeks to understand the brain mechanisms necessary for people to learn to perceive sounds. Neural circuits and learning. The research team will test people with and without musical training to evaluate their capacity to learn ...

  12. Neural Architecture for Feature Binding in Visual Working Memory.

    Science.gov (United States)

    Schneegans, Sebastian; Bays, Paul M

    2017-04-05

    Binding refers to the operation that groups different features together into objects. We propose a neural architecture for feature binding in visual working memory that employs populations of neurons with conjunction responses. We tested this model using cued recall tasks, in which subjects had to memorize object arrays composed of simple visual features (color, orientation, and location). After a brief delay, one feature of one item was given as a cue, and the observer had to report, on a continuous scale, one or two other features of the cued item. Binding failure in this task is associated with swap errors, in which observers report an item other than the one indicated by the cue. We observed that the probability of swapping two items strongly correlated with the items' similarity in the cue feature dimension, and found a strong correlation between swap errors occurring in spatial and nonspatial report. The neural model explains both swap errors and response variability as results of decoding noisy neural activity, and can account for the behavioral results in quantitative detail. We then used the model to compare alternative mechanisms for binding nonspatial features. We found the behavioral results fully consistent with a model in which nonspatial features are bound exclusively via their shared location, with no indication of direct binding between color and orientation. These results provide evidence for a special role of location in feature binding, and the model explains how this special role could be realized in the neural system. SIGNIFICANCE STATEMENT The problem of feature binding is of central importance in understanding the mechanisms of working memory. How do we remember not only that we saw a red and a round object, but that these features belong together to a single object rather than to different objects in our environment? Here we present evidence for a neural mechanism for feature binding in working memory, based on encoding of visual

  13. Architecture and performance of neural networks for efficient A/C control in buildings

    International Nuclear Information System (INIS)

    Mahmoud, Mohamed A.; Ben-Nakhi, Abdullatif E.

    2003-01-01

    The feasibility of using neural networks (NNs) for optimizing air conditioning (AC) setback scheduling in public buildings was investigated. The main focus is on optimizing the network architecture in order to achieve best performance. To save energy, the temperature inside public buildings is allowed to rise after business hours by setting back the thermostat. The objective is to predict the time of the end of thermostat setback (EoS) such that the design temperature inside the building is restored in time for the start of business hours. State of the art building simulation software, ESP-r, was used to generate a database that covered the years 1995-1999. The software was used to calculate the EoS for two office buildings using the climate records in Kuwait. The EoS data for 1995 and 1996 were used for training and testing the NNs. The robustness of the trained NNs was tested by applying them to a 'production' data set (1997-1999), which the networks had never 'seen' before. For each of the six different NN architectures evaluated, parametric studies were performed to determine the network parameters that best predict the EoS. External hourly temperature readings were used as network inputs, and the thermostat end of setback (EoS) is the output. The NN predictions were improved by developing a neural control scheme (NC). This scheme is based on using the temperature readings as they become available. For each NN architecture considered, six NNs were designed and trained for this purpose. The performance of the NN analysis was evaluated using a statistical indicator (the coefficient of multiple determination) and by statistical analysis of the error patterns, including ANOVA (analysis of variance). The results show that the NC, when used with a properly designed NN, is a powerful instrument for optimizing AC setback scheduling based only on external temperature records
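
    The core regression task described here, predicting the end of thermostat setback from external temperature records, can be illustrated in a few lines of scikit-learn; the synthetic data, network size, and the use of MLPRegressor are assumptions made for demonstration only and do not reflect the six architectures evaluated in the study.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the ESP-r database: 24 hourly external temperatures
        # per day as inputs, end-of-setback time (hours before business start) as output.
        rng = np.random.default_rng(0)
        X = 20 + 15 * rng.random((1500, 24))            # hourly temperatures, deg C
        y = 0.15 * X[:, :6].mean(axis=1) - 1.0 + 0.1 * rng.standard_normal(1500)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(12,), activation='tanh',
                             max_iter=2000, random_state=0)
        model.fit(X_train, y_train)
        print("R^2 on held-out days:", round(model.score(X_test, y_test), 3))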

  14. Learning speaker-specific characteristics with a deep neural architecture.

    Science.gov (United States)

    Chen, Ke; Salman, Ahmad

    2011-11-01

    Speech signals convey various yet mixed information ranging from linguistic to speaker-specific information. However, most acoustic representations characterize all the different kinds of information as a whole, which could hinder either a speech recognition or a speaker recognition (SR) system from producing better performance. In this paper, we propose a novel deep neural architecture (DNA) especially for learning speaker-specific characteristics from mel-frequency cepstral coefficients, an acoustic representation commonly used in both speech recognition and SR, which results in a speaker-specific overcomplete representation. In order to learn intrinsic speaker-specific characteristics, we come up with an objective function consisting of contrastive losses in terms of speaker similarity/dissimilarity and data reconstruction losses used as regularization to normalize the interference of non-speaker-related information. Moreover, we employ a hybrid learning strategy for learning parameters of the deep neural networks: i.e., local yet greedy layerwise unsupervised pretraining for initialization and global supervised learning for the ultimate discriminative goal. With four Linguistic Data Consortium (LDC) benchmarks and two non-English corpora, we demonstrate that our overcomplete representation is robust in characterizing various speakers, no matter whether their utterances have been used in training our DNA, and highly insensitive to text and languages spoken. Extensive comparative studies suggest that our approach yields favorable results in speaker verification and segmentation. Finally, we discuss several issues concerning our proposed approach.
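
    The general form of such an objective, a margin-based contrastive term over pairs of speaker embeddings plus a reconstruction term acting as a regularizer, can be sketched in NumPy as below. The margin, weighting, embedding size, and function name are illustrative assumptions rather than the published formulation.

        import numpy as np

        def speaker_objective(emb_a, emb_b, same_speaker, recon_a, recon_b, mfcc_a, mfcc_b,
                              margin=1.0, reg=0.1):
            """Contrastive loss on speaker embeddings plus reconstruction regularization:
            pull embeddings of the same speaker together, push different speakers apart
            by at least `margin`, and penalize poor reconstruction of the MFCC inputs."""
            d = np.linalg.norm(emb_a - emb_b)
            if same_speaker:
                contrastive = d ** 2                      # similar pair: small distance
            else:
                contrastive = max(0.0, margin - d) ** 2   # dissimilar pair: at least margin apart
            reconstruction = np.mean((recon_a - mfcc_a) ** 2) + np.mean((recon_b - mfcc_b) ** 2)
            return contrastive + reg * reconstruction

        # Toy usage with random 64-d embeddings and 39-d MFCC frames
        rng = np.random.default_rng(0)
        e1, e2 = rng.normal(size=64), rng.normal(size=64)
        m1, m2 = rng.normal(size=39), rng.normal(size=39)
        print(speaker_objective(e1, e2, same_speaker=False,
                                recon_a=0.9 * m1, recon_b=0.9 * m2, mfcc_a=m1, mfcc_b=m2))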

  15. Formation process of Malaysian modern architecture under influence of nationalism

    OpenAIRE

    宇高, 雄志; 山崎, 大智

    2001-01-01

    This paper examines the formation process of Malaysian modern architecture under the influence of nationalism, through the process of independence of Malaysia. The national style, "Malaysian national architecture", emerged against the background of the political environment of the post-colonial situation. Malaysian urban design is also determined by the balance between ethnic cultures and the national culture. In Malaysia, they decided to choose the Malay ethnic culture as the national culture....

  16. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we are not yet able to fully characterize the behavioural and mechanistic underpinnings of natural language, or how mechanisms in the brain allow language to be acquired and processed. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of appropriate characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network, in which different parts have different leakage characteristics and thus operate on multiple timescales for every modality, and in which the higher-level nodes of all modalities are associated into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.

  17. Experimental study and artificial neural network modeling of tartrazine removal by photocatalytic process under solar light.

    Science.gov (United States)

    Sebti, Aicha; Souahi, Fatiha; Mohellebi, Faroudja; Igoud, Sadek

    2017-07-01

    This research focuses on the application of an artificial neural network (ANN) to predict the removal efficiency of tartrazine from simulated wastewater using a photocatalytic process under solar illumination. A program is developed in Matlab software to optimize the neural network architecture and select the suitable combination of training algorithm, activation function and number of hidden neurons. The experimental results of a batch reactor operated under different conditions of pH, TiO2 concentration, initial organic pollutant concentration and solar radiation intensity are used to train, validate and test the networks. While negligible mineralization is demonstrated, the experimental results show that under sunlight irradiation, 85% of tartrazine is removed after 300 min using only 0.3 g/L of TiO2 powder. Therefore, irradiation time is prolonged and almost 66% of total organic carbon is reduced after 15 hours. An ANN with a 5-8-1 architecture, a Bayesian regularization back-propagation algorithm and a hyperbolic tangent sigmoid transfer function is found to be able to predict the response with high accuracy. In addition, the connection weights approach is used to assess the contribution of each input variable to the ANN model response. Among the five experimental parameters, the irradiation time has the greatest effect on the removal efficiency of tartrazine.

  18. Optimization of neural network architecture for classification of radar jamming FM signals

    Science.gov (United States)

    Soto, Alberto; Mendoza, Ariadna; Flores, Benjamin C.

    2017-05-01

    The purpose of this study is to investigate several artificial Neural Network (NN) architectures in order to design a cognitive radar system capable of optimally distinguishing linear Frequency-Modulated (FM) signals from bandlimited Additive White Gaussian Noise (AWGN). The goal is to create a theoretical framework to determine an optimal NN architecture that achieves a Probability of Detection (PD) of 95% or higher and a Probability of False Alarm (PFA) of 1.5% or lower at 5 dB Signal to Noise Ratio (SNR). Literature research reveals that frequency-domain power spectral densities characterize a signal more efficiently than their time-domain counterparts. Therefore, the input data is preprocessed by calculating the magnitude squared of the Discrete Fourier Transform of the digitally sampled bandlimited AWGN and linear FM signals to populate a matrix containing N number of samples and M number of spectra. This matrix is used as input for the NN, and the spectra are divided as follows: 70% for training, 15% for validation, and 15% for testing. The study begins by experimentally deducing the optimal number of hidden neurons (1-40 neurons), then the optimal number of hidden layers (1-5 layers), and lastly, the most efficient learning algorithm. The training algorithms examined are: Resilient Backpropagation, Scaled Conjugate Gradient, Conjugate Gradient with Powell/Beale Restarts, Polak-Ribière Conjugate Gradient, and Variable Learning Rate Backpropagation. We determine that an architecture with ten hidden neurons (or more), one hidden layer, and the Scaled Conjugate Gradient training algorithm constitutes an optimal architecture for our application.
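
    The preprocessing step described, populating a spectra matrix from the magnitude-squared DFT of noisy linear FM and noise-only records, can be sketched in a few lines of NumPy. The sample rate, chirp band, record length, and number of spectra below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, fs = 1024, 1.0e6                  # samples per record and sample rate (assumed)
        t = np.arange(N) / fs

        def linear_fm(f0=50e3, f1=200e3):
            # Linear FM (chirp): instantaneous frequency sweeps from f0 to f1
            k = (f1 - f0) / t[-1]
            return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

        sig = linear_fm()
        noise_std = np.sqrt(np.mean(sig**2) / 10**(5.0 / 10))   # AWGN level for 5 dB SNR

        def spectrum(x):
            # Magnitude-squared DFT of a noisy record: one row of the feature matrix
            return np.abs(np.fft.rfft(x + rng.normal(scale=noise_std, size=N)))**2

        # Build the M-spectra matrix: half FM-plus-noise records, half noise-only records
        M = 200
        X = np.vstack([spectrum(sig) for _ in range(M // 2)] +
                      [spectrum(np.zeros(N)) for _ in range(M // 2)])
        y = np.array([1] * (M // 2) + [0] * (M // 2))    # 1 = FM present, 0 = noise only
        print(X.shape)   # (200, 513): spectra ready to split 70/15/15 for the NN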

  19. The Functional Architecture of the Brain Underlies Strategic Deception in Impression Management

    OpenAIRE

    Qiang Luo; Qiang Luo; Yina Ma; Yina Ma; Meghana A. Bhatt; Meghana A. Bhatt; P. Read Montague; P. Read Montague; P. Read Montague; Jianfeng Feng; Jianfeng Feng; Jianfeng Feng; Jianfeng Feng; Jianfeng Feng

    2017-01-01

    Impression management, as one of the most essential skills of social function, impacts one's survival and success in human societies. However, the neural architecture underpinning this social skill remains poorly understood. By employing a two-person bargaining game, we exposed three strategies involving distinct cognitive processes for social impression management with different levels of strategic deception. We utilized a novel adaptation of Granger causality accounting for signal-dependent...

  20. Tracting the neural basis of music: Deficient structural connectivity underlying acquired amusia.

    Science.gov (United States)

    Sihvonen, Aleksi J; Ripollés, Pablo; Särkämö, Teppo; Leo, Vera; Rodríguez-Fornells, Antoni; Saunavaara, Jani; Parkkola, Riitta; Soinila, Seppo

    2017-12-01

    Acquired amusia provides a unique opportunity to investigate the fundamental neural architectures of musical processing due to the transition from a functioning to defective music processing system. Yet, the white matter (WM) deficits in amusia remain systematically unexplored. To evaluate which WM structures form the neural basis for acquired amusia and its recovery, we studied 42 stroke patients longitudinally at acute, 3-month, and 6-month post-stroke stages using DTI [tract-based spatial statistics (TBSS) and deterministic tractography (DT)] and the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA). Non-recovered amusia was associated with structural damage and subsequent degeneration in multiple WM tracts including the right inferior fronto-occipital fasciculus (IFOF), arcuate fasciculus (AF), inferior longitudinal fasciculus (ILF), uncinate fasciculus (UF), and frontal aslant tract (FAT), as well as in the corpus callosum (CC) and its posterior part (tapetum). In a linear regression analysis, the volume of the right IFOF was the main predictor of MBEA performance across time. Overall, our results provide a comprehensive picture of the large-scale deficits in intra- and interhemispheric structural connectivity underlying amusia, and conversely highlight which pathways are crucial for normal music perception. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Strategies for memory-based decision making: Modeling behavioral and neural signatures within a cognitive architecture.

    Science.gov (United States)

    Fechner, Hanna B; Pachur, Thorsten; Schooler, Lael J; Mehlhorn, Katja; Battal, Ceren; Volz, Kirsten G; Borst, Jelmer P

    2016-12-01

    How do people use memories to make inferences about real-world objects? We tested three strategies based on predicted patterns of response times and blood-oxygen-level-dependent (BOLD) responses: one strategy that relies solely on recognition memory, a second that retrieves additional knowledge, and a third, lexicographic (i.e., sequential) strategy, that considers knowledge conditionally on the evidence obtained from recognition memory. We implemented the strategies as computational models within the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture, which allowed us to derive behavioral and neural predictions that we then compared to the results of a functional magnetic resonance imaging (fMRI) study in which participants inferred which of two cities is larger. Overall, versions of the lexicographic strategy, according to which knowledge about many but not all alternatives is searched, provided the best account of the joint patterns of response times and BOLD responses. These results provide insights into the interplay between recognition and additional knowledge in memory, hinting at an adaptive use of these two sources of information in decision making. The results highlight the usefulness of implementing models of decision making within a cognitive architecture to derive predictions on the behavioral and neural level. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Performance Evaluation of 14 Neural Network Architectures Used for Predicting Heat Transfer Characteristics of Engine Oils

    Science.gov (United States)

    Al-Ajmi, R. M.; Abou-Ziyan, H. Z.; Mahmoud, M. A.

    2012-01-01

    This paper reports the results of a comprehensive study that aimed at identifying the best neural network architecture and parameters to predict subcooled boiling characteristics of engine oils. A total of 57 different neural networks (NNs), derived from 14 different NN architectures, were evaluated for four different prediction cases. The NNs were trained on datasets from experiments performed on five engine oils of different chemical compositions. The performance of each NN was evaluated using a rigorous statistical analysis as well as careful examination of the smoothness of the predicted boiling curves. One NN, out of the 57 evaluated, correctly predicted the boiling curves for all cases considered, either for individual oils or for all oils taken together. It was found that the pattern selection and weight update techniques strongly affect the performance of the NNs. It was also revealed that the use of descriptive statistical analysis such as R2, mean error, standard deviation, and T and slope tests is a necessary but not sufficient condition for evaluating NN performance. The performance criteria should also include inspection of the smoothness of the predicted curves, either visually or by plotting the slopes of these curves.

  3. A Neural Signature Encoding Decisions under Perceptual Ambiguity.

    Science.gov (United States)

    Sun, Sai; Yu, Rongjun; Wang, Shuo

    2017-01-01

    People often make perceptual decisions with ambiguous information, but it remains unclear whether the brain has a common neural substrate that encodes various forms of perceptual ambiguity. Here, we used three types of perceptually ambiguous stimuli as well as task instructions to examine the neural basis for both stimulus-driven and task-driven perceptual ambiguity. We identified a neural signature, the late positive potential (LPP), that encoded a general form of stimulus-driven perceptual ambiguity. In addition to stimulus-driven ambiguity, the LPP was also modulated by ambiguity in task instructions. To further specify the functional role of the LPP and elucidate the relationship between stimulus ambiguity, behavioral response, and the LPP, we employed regression models and found that the LPP was specifically associated with response latency and confidence rating, suggesting that the LPP encoded decisions under perceptual ambiguity. Finally, direct behavioral ratings of stimulus and task ambiguity confirmed our neurophysiological findings, which could not be attributed to differences in eye movements either. Together, our findings argue for a common neural signature that encodes decisions under perceptual ambiguity but is subject to the modulation of task ambiguity. Our results represent an essential first step toward a complete neural understanding of human perceptual decision making.

  4. Artificial Neural Networks as an Architectural Design Tool-Generating New Detail Forms Based On the Roman Corinthian Order Capital

    Science.gov (United States)

    Radziszewski, Kacper

    2017-10-01

    The following paper presents the results of research in the field of machine learning, investigating the scope of application of artificial neural network algorithms as a tool in architectural design. The computational experiment was conducted using the backward propagation of errors method to train the artificial neural network, which was trained on the geometry of the details of the Roman Corinthian order capital. During the experiment, the best results were obtained with an input training data set combining five local geometry parameters: Theta, Pi and Rho in a spherical coordinate system based on the capital volume centroid, followed by the Z value of the Cartesian coordinate system and a distance from vertical planes created based on the capital symmetry. Additionally, the experiment identified the optimal count and structure of the artificial neural network's hidden layers, giving errors below 0.2% for the aforementioned input parameters. Once successfully trained, the artificial network was able to mimic the detail composition on any other geometry type given. Despite calculating the transformed geometry locally and separately for each of the thousands of surface points, the system could create visually attractive and diverse, complex patterns. The designed tool, based on the supervised learning method of machine learning, gives the possibility of generating new architectural forms, free of the bounds of the designer's imagination. Implementing the infinitely broad computational methods of machine learning, or Artificial Intelligence in general, could not only accelerate and simplify the design process, but also give an opportunity to explore unpredictable forms never seen before, alongside solutions for everyday architectural practice.
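
    A sketch of how such per-point input features might be assembled is given below in NumPy: two spherical angles and a radius measured about the volume centroid, the Cartesian Z value, and the distance to the nearest vertical symmetry plane. The angle conventions, plane definitions, and data are assumptions made for illustration, not the study's exact definitions.

        import numpy as np

        def local_features(points, symmetry_normals):
            """For each surface point, compute five local-geometry inputs: two spherical
            angles and the radius about the volume centroid, the Cartesian Z value, and
            the distance to the nearest vertical symmetry plane through the centroid."""
            centroid = points.mean(axis=0)
            rel = points - centroid
            rho = np.linalg.norm(rel, axis=1)
            theta = np.arccos(np.clip(rel[:, 2] / np.maximum(rho, 1e-12), -1, 1))  # polar angle
            phi = np.arctan2(rel[:, 1], rel[:, 0])                                  # azimuth
            z = points[:, 2]
            dists = np.abs(rel @ np.asarray(symmetry_normals, dtype=float).T)
            d_sym = dists.min(axis=1)
            return np.column_stack([theta, phi, rho, z, d_sym])

        # Toy usage: random surface points and two symmetry planes (x = 0 and y = 0)
        pts = np.random.default_rng(0).normal(size=(1000, 3))
        feats = local_features(pts, symmetry_normals=[[1, 0, 0], [0, 1, 0]])
        print(feats.shape)   # (1000, 5): one feature row per surface point for the network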

  5. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition or handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore low-level combination (feature space combination), but we also explore high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.

  6. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  7. Artificial Neural Networks for differential diagnosis of breast lesions in MR-Mammography: A systematic approach addressing the influence of network architecture on diagnostic performance using a large clinical database

    International Nuclear Information System (INIS)

    Dietzel, Matthias; Baltzer, Pascal A.T.; Dietzel, Andreas; Zoubi, Ramy; Gröschel, Tobias; Burmeister, Hartmut P.; Bogdan, Martin; Kaiser, Werner A.

    2012-01-01

    Rationale and objectives: Differential diagnosis of lesions in MR-Mammography (MRM) remains a complex task. The aim of this MRM study was to design and to test the robustness of Artificial Neural Network architectures to predict malignancy using a large clinical database. Materials and methods: For this IRB-approved investigation standardized protocols and study design were applied (T1w-FLASH; 0.1 mmol/kgBW Gd-DTPA; T2w-TSE; histological verification after MRM). All lesions were evaluated by two experienced (>500 MRM) radiologists in consensus. In every lesion, 18 previously published descriptors were assessed and documented in the database. An Artificial Neural Network (ANN) was developed to process this database (The-MathWorks/Inc., feed-forward-architecture/resilient back-propagation-algorithm). All 18 descriptors were set as input variables, whereas the histological result (malignant vs. benign) was defined as the classification variable. Initially, the ANN was optimized in terms of “Training Epochs” (TE), “Hidden Layers” (HL), “Learning Rate” (LR) and “Neurons” (N). Robustness of the ANN was addressed by repeated evaluation cycles (n: 9) with receiver operating characteristics (ROC) analysis of the results applying 4-fold Cross Validation. The best network architecture was identified comparing the corresponding Area under the ROC curve (AUC). Results: Histopathology revealed 436 benign and 648 malignant lesions. Enhancing the level of complexity could not increase diagnostic accuracy of the network (P: n.s.). The optimized ANN architecture (TE: 20, HL: 1, N: 5, LR: 1.2) was accurate (mean-AUC 0.888; P: <0.001) and robust (CI: 0.885–0.892; range: 0.880–0.898). Conclusion: The optimized neural network showed robust performance and high diagnostic accuracy for prediction of malignancy on unknown data.

  8. Artificial Neural Networks for differential diagnosis of breast lesions in MR-Mammography: a systematic approach addressing the influence of network architecture on diagnostic performance using a large clinical database.

    Science.gov (United States)

    Dietzel, Matthias; Baltzer, Pascal A T; Dietzel, Andreas; Zoubi, Ramy; Gröschel, Tobias; Burmeister, Hartmut P; Bogdan, Martin; Kaiser, Werner A

    2012-07-01

    Differential diagnosis of lesions in MR-Mammography (MRM) remains a complex task. The aim of this MRM study was to design and to test the robustness of Artificial Neural Network architectures to predict malignancy using a large clinical database. For this IRB-approved investigation standardized protocols and study design were applied (T1w-FLASH; 0.1 mmol/kgBW Gd-DTPA; T2w-TSE; histological verification after MRM). All lesions were evaluated by two experienced (>500 MRM) radiologists in consensus. In every lesion, 18 previously published descriptors were assessed and documented in the database. An Artificial Neural Network (ANN) was developed to process this database (The-MathWorks/Inc., feed-forward-architecture/resilient back-propagation-algorithm). All 18 descriptors were set as input variables, whereas the histological result (malignant vs. benign) was defined as the classification variable. Initially, the ANN was optimized in terms of "Training Epochs" (TE), "Hidden Layers" (HL), "Learning Rate" (LR) and "Neurons" (N). Robustness of the ANN was addressed by repeated evaluation cycles (n: 9) with receiver operating characteristics (ROC) analysis of the results applying 4-fold Cross Validation. The best network architecture was identified comparing the corresponding Area under the ROC curve (AUC). Histopathology revealed 436 benign and 648 malignant lesions. Enhancing the level of complexity could not increase diagnostic accuracy of the network (P: n.s.). The optimized ANN architecture (TE: 20, HL: 1, N: 5, LR: 1.2) was accurate (mean-AUC 0.888; P: <0.001) and robust (CI: 0.885-0.892; range: 0.880-0.898). The optimized neural network showed robust performance and high diagnostic accuracy for prediction of malignancy on unknown data. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language.

    Directory of Open Access Journals (Sweden)

    Bruno Golosio

    Full Text Available Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring these models together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on literature on early language assessment, at the level of an approximately 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.

  10. Neural plasticity of development and learning.

    Science.gov (United States)

    Galván, Adriana

    2010-06-01

    Development and learning are powerful agents of change across the lifespan that induce robust structural and functional plasticity in neural systems. An unresolved question in developmental cognitive neuroscience is whether development and learning share the same neural mechanisms associated with experience-related neural plasticity. In this article, I outline the conceptual and practical challenges of this question, review insights gleaned from adult studies, and describe recent strides toward examining this topic across development using neuroimaging methods. I suggest that development and learning are not two completely separate constructs and instead, that they exist on a continuum. While progressive and regressive changes are central to both, the behavioral consequences associated with these changes are closely tied to the existing neural architecture of maturity of the system. Eventually, a deeper, more mechanistic understanding of neural plasticity will shed light on behavioral changes across development and, more broadly, about the underlying neural basis of cognition. (c) 2010 Wiley-Liss, Inc.

  11. Brain architecture: a design for natural computation.

    Science.gov (United States)

    Kaiser, Marcus

    2007-12-15

    Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.

  12. A brick-architecture-based mobile under-vehicle inspection system

    Science.gov (United States)

    Qian, Cheng; Page, David; Koschan, Andreas; Abidi, Mongi

    2005-05-01

    In this paper, a mobile scanning system for real-time under-vehicle inspection is presented, which is founded on a "Brick" architecture. In this "Brick" architecture, the inspection system is decomposed into bricks of three kinds: sensing, mobility, and computing. These bricks are physically and logically independent and communicate with each other by wireless communication. Each brick is mainly composed of five modules: data acquisition, data processing, data transmission, power, and self-management. These five modules can be further decomposed into submodules whose functions and interfaces are well defined. Based on this architecture, the system is built from four bricks: two sensing bricks consisting of a range scanner and a line CCD, one mobility brick, and one computing brick. The sensing bricks capture geometric data and texture data of the under-vehicle scene, while the mobility brick provides positioning data along the motion path. Data of these three modalities are transmitted to the computing brick, where they are fused to reconstruct a 3D under-vehicle model for visualization and danger inspection. This system has been successfully used in several military applications and proved to be an effective and safer method for national security.

  13. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    Science.gov (United States)

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.
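
    A toy PyTorch sketch of the general idea, a convolutional network operating on the region-by-region connectivity matrix with one input channel per connectivity metric, is given below. The filter shapes, layer widths, number of regions, and metrics are illustrative assumptions and do not reproduce the published CCNN configuration.

        import torch
        import torch.nn as nn

        class ConnectomeCNN(nn.Module):
            """Toy connectome classifier: treats the R x R functional connectivity matrix
            as an image and convolves whole rows (one region's connections at a time).
            Filter shapes and widths are illustrative, not the published CCNN settings."""
            def __init__(self, n_regions=90, n_metrics=2, n_classes=2):
                super().__init__()
                # Each connectivity metric enters as a separate input channel,
                # letting the model combine several connectivity descriptors.
                self.row_conv = nn.Conv2d(n_metrics, 16, kernel_size=(1, n_regions))
                self.region_conv = nn.Conv2d(16, 32, kernel_size=(n_regions, 1))
                self.fc = nn.Linear(32, n_classes)
            def forward(self, x):                      # x: (batch, n_metrics, R, R)
                x = torch.relu(self.row_conv(x))       # -> (batch, 16, R, 1)
                x = torch.relu(self.region_conv(x))    # -> (batch, 32, 1, 1)
                return self.fc(x.flatten(1))

        model = ConnectomeCNN()
        fake_connectomes = torch.randn(4, 2, 90, 90)   # 4 subjects, 2 connectivity metrics
        print(model(fake_connectomes).shape)           # (4, 2) class scores per subject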

  14. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements....... A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech......, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where...
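
    The two learning objectives contrasted here can be made concrete with a short NumPy sketch: the ideal binary mask labels each time-frequency unit as speech- or noise-dominated, while the ideal ratio mask assigns a soft gain derived from the local speech-to-mixture power ratio. The spectrogram data, the 0 dB local criterion, and the 0.5 compression exponent are common choices assumed for illustration, not values taken from the paper.

        import numpy as np

        def ideal_binary_mask(speech_power, noise_power, lc_db=0.0):
            """IBM: label a time-frequency unit 1 if its local SNR exceeds the
            criterion (here 0 dB), i.e. speech-dominated, and 0 otherwise."""
            snr_db = 10 * np.log10(speech_power / np.maximum(noise_power, 1e-12))
            return (snr_db > lc_db).astype(float)

        def ideal_ratio_mask(speech_power, noise_power, beta=0.5):
            """IRM: a soft gain in [0, 1] per time-frequency unit; beta = 0.5 is a
            commonly used compression exponent (an assumption, not from the paper)."""
            return (speech_power / (speech_power + noise_power + 1e-12)) ** beta

        # Toy spectrograms: 64 frequency channels x 100 frames of power values
        rng = np.random.default_rng(0)
        S = rng.gamma(2.0, size=(64, 100))
        N = rng.gamma(2.0, size=(64, 100))
        ibm, irm = ideal_binary_mask(S, N), ideal_ratio_mask(S, N)
        enhanced = irm * (S + N)   # applying the soft mask to the noisy mixture power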

  15. Tensor Basis Neural Network v. 1.0 (beta)

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-28

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.

  16. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  17. A loop-based neural architecture for structured behavior encoding and decoding.

    Science.gov (United States)

    Gisiger, Thomas; Boukadoum, Mounir

    2018-02-01

    We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammal brain. Its main novelty lies in its topological structure which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Seafloor classification using artificial neural network architecture from central western continental shelf of India

    Science.gov (United States)

    Mahale, Vasudev; Chakraborty, Bishwajit; Navelkar, Gajanan S.; Prabhu Desai, R. G.

    2005-04-01

    Seafloor classification studies are carried out on the central western continental shelf of India employing two-frequency, normal-incidence, single-beam echo-sounder backscatter data. Echo waveform data from different seafloor sediment areas are utilized for the present study. Three artificial neural network (ANN) architectures, namely Self-Organization Feature Maps (SOFM), Multi-Layer Perceptron (MLP), and Learning Vector Quantization (LVQ), are applied for seafloor classification. In the case of the MLP, features are extracted from the received echo signal, on the basis of which classification is carried out. In the case of the SOFM, a simple moving-average echo waveform pre-processing technique is found to yield excellent classification results. Finally, LVQ, an ANN of hybrid architecture, is found to be the most efficient seafloor classifier, especially from the point of view of real-time application. The simultaneously acquired sediment samples, multi-beam bathymetry, side-scan sonar, and echo-waveform-based seafloor classification results are indicative of a depositional (inner shelf) environment, a non-depositional or erosional (outer shelf) environment, and a combination of both in the transition zone. [Work supported by DIT.]

  19. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open-source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  20. Stochastic Spiking Neural Networks Enabled by Magnetic Tunnel Junctions: From Nontelegraphic to Telegraphic Switching Regimes

    Science.gov (United States)

    Liyanagedera, Chamika M.; Sengupta, Abhronil; Jaiswal, Akhilesh; Roy, Kaushik

    2017-12-01

    Stochastic spiking neural networks based on nanoelectronic spin devices can be a possible pathway to achieving "brainlike" compact and energy-efficient cognitive intelligence. Such computational models attempt to exploit the intrinsic device stochasticity of nanoelectronic synaptic or neural components to perform learning or inference. However, there has been limited analysis of the scaling effect of stochastic spin devices and its impact on the operation of such stochastic networks at the system level. This work attempts to explore the design space and analyze the performance of nanomagnet-based stochastic neuromorphic computing architectures for magnets with different barrier heights. We illustrate how the underlying network architecture must be modified to account for the random telegraphic switching behavior displayed by magnets with low barrier heights as they are scaled into the superparamagnetic regime. We perform a device-to-system-level analysis on a deep neural-network architecture for a digit-recognition problem on the MNIST data set.
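    At the system level, such devices are commonly abstracted as stochastic neurons whose switching probability grows sigmoidally with the input drive. The following sketch uses that common abstraction (the sigmoid form, the gain beta, and the weights are assumptions for illustration, not device physics from the paper) to show how Bernoulli spike trains arise from a weighted input.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_spikes(inputs, weights, beta=2.0, n_steps=1000):
    """Accumulate Bernoulli spikes of a sigmoidal stochastic neuron."""
    drive = float(np.dot(weights, inputs))
    p_switch = 1.0 / (1.0 + np.exp(-beta * drive))   # assumed switching probability
    spikes = rng.random(n_steps) < p_switch
    return spikes.mean(), p_switch

x = np.array([0.8, -0.2, 0.5])
w = np.array([1.0, 0.7, -0.3])
rate, p = stochastic_spikes(x, w)
print(f"target switching probability {p:.3f}, empirical spike rate {rate:.3f}")
```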

  1. Neural networks and orbit control in accelerators

    International Nuclear Information System (INIS)

    Bozoki, E.; Friedman, A.

    1994-01-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially their ability to cope with the nonlinear behavior of the orbit response to 'kicks' and with the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given.

  2. Neural chips, neural computers and application in high and superhigh energy physics experiments

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    2001-01-01

    The architectural peculiarities and characteristics of a series of neural chips and neural computers used in scientific instruments are considered. Trends in their development and use in high-energy and superhigh-energy physics experiments are described. Comparative data are given that characterize the efficient use of neural chips for useful event selection, classification of elementary particles, reconstruction of charged-particle tracks, and the search for hypothetical Higgs particles. The characteristics of native neural chips and accelerated neural boards are also considered.

  3. Classification of behavior using unsupervised temporal neural networks

    International Nuclear Information System (INIS)

    Adair, K.L.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory (Fuzzy ART). The combination yields a neural network capable of quickly clustering sequential patterns as they are generated. The applicability of the architecture is illustrated through a facility monitoring problem.

  4. Genetic optimization of neural network architecture

    International Nuclear Information System (INIS)

    Harp, S.A.; Samad, T.

    1994-03-01

    Neural networks are now a popular technology for a broad variety of application domains, including the electric utility industry. Yet, as the technology continues to gain increasing acceptance, it is also increasingly apparent that the power that neural networks provide is not an unconditional blessing. Considerable care must be exercised during application development if the full benefit of the technology is to be realized. At present, no fully general theory or methodology for neural network design is available, and application development is a trial-and-error process that is time-consuming and expertise-intensive. Each application demands appropriate selections of the network input space, the network structure, and values of the learning algorithm parameters, design choices that are closely coupled in ways that largely remain a mystery. This EPRI-funded exploratory research project was initiated to take the key next step in this research program: the validation of the approach on a realistic problem. We focused on the problem of modeling the thermal performance of the TVA Sequoyah nuclear power plant (units 1 and 2).

  5. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
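    A rough sketch of the consensual step, under stated assumptions: a few "stage" classifiers are trained on different transforms of the same input and their class-probability outputs are combined with weights. Here the transforms, the small MLP stage networks, and the accuracy-based weighting are illustrative stand-ins for the paper's optimized weighting and wavelet-packet transforms.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Illustrative input transforms standing in for the PCNN stage inputs.
transforms = [lambda A: A, lambda A: np.sqrt(A + 1.0), lambda A: np.log1p(A)]

stage_probs, weights = [], []
for f in transforms:
    stage = MLPClassifier(hidden_layer_sizes=(32,), max_iter=400, random_state=0)
    stage.fit(f(X_tr), y_tr)
    weights.append(stage.score(f(X_tr), y_tr))        # assumed weighting: stage accuracy
    stage_probs.append(stage.predict_proba(f(X_te)))

weights = np.array(weights) / np.sum(weights)
consensus = np.tensordot(weights, np.stack(stage_probs), axes=1)  # weighted consensual vote
accuracy = (consensus.argmax(axis=1) == y_te).mean()
print(f"consensual classification accuracy: {accuracy:.3f}")
```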

  6. Brain architecture: A design for natural computation

    OpenAIRE

    Kaiser, Marcus

    2008-01-01

    Fifty years ago, John von Neumann compared the architecture of the brain with that of computers that he invented and which is still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and ...

  7. Genetic architecture underlying convergent evolution of egg-laying behavior in a seed-feeding beetle.

    Science.gov (United States)

    Fox, Charles W; Wagner, James D; Cline, Sara; Thomas, Frances Ann; Messina, Frank J

    2009-05-01

    Independent populations subjected to similar environments often exhibit convergent evolution. An unresolved question is the frequency with which such convergence reflects parallel genetic mechanisms. We examined the convergent evolution of egg-laying behavior in the seed-feeding beetle Callosobruchus maculatus. Females avoid ovipositing on seeds bearing conspecific eggs, but the degree of host discrimination varies among geographic populations. In a previous experiment, replicate lines switched from a small host to a large one evolved reduced discrimination after 40 generations. We used line crosses to determine the genetic architecture underlying this rapid response. The most parsimonious genetic models included dominance and/or epistasis for all crosses. The genetic architecture underlying reduced discrimination in two lines was not significantly different from the architecture underlying differences between geographic populations, but the architecture underlying the divergence of a third line differed from all others. We conclude that convergence of this complex trait may in some cases involve parallel genetic mechanisms.

  8. Do neural nets learn statistical laws behind natural language?

    Directory of Open Access Journals (Sweden)

    Shuntaro Takahashi

    Full Text Available The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
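    Both statistical laws mentioned above are easy to estimate from any token sequence, whether a natural corpus or text sampled from a language model. A small, self-contained sketch (plain Python; the toy text is a placeholder):

```python
import re
from collections import Counter

def zipf_and_heaps(text, heaps_points=5):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)

    # Zipf's law: the r-th most frequent word has frequency roughly C / r.
    freqs = sorted(counts.values(), reverse=True)
    zipf_sample = [(r, freqs[r - 1]) for r in (1, 2, 4, 8) if r <= len(freqs)]

    # Heaps' law: vocabulary size V(n) grows like K * n^beta with text length n.
    heaps_sample = []
    for i in range(1, heaps_points + 1):
        n = i * len(words) // heaps_points
        heaps_sample.append((n, len(set(words[:n]))))
    return zipf_sample, heaps_sample

sample_text = "the cat sat on the mat and the dog chased the cat into the garden " * 40
zipf, heaps = zipf_and_heaps(sample_text)
print("rank-frequency pairs:", zipf)
print("(tokens, vocabulary) pairs:", heaps)
```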

  9. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  10. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility.

    Science.gov (United States)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail A; Dau, Torsten

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech intelligibility in normal-hearing listeners. A substantial improvement of 25.4 percentage points in speech intelligibility scores was found going from a subband-based architecture, in which a Gaussian Mixture Model-based classifier predicts the distributions of speech and noise for each frequency channel, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where the units are assigned a continuous value between zero and one. Therefore, both components play significant roles and by combining them, speech intelligibility improvements were obtained in a six-talker condition at a low signal-to-noise ratio.
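    The two learning objectives contrasted above have simple per-unit definitions. The sketch below computes both masks from speech and noise energy spectrograms (the 0 dB local SNR criterion and the square-root compression of the ratio mask are common conventions assumed here, not parameters reported in the study):

```python
import numpy as np

def ideal_binary_mask(S, N, lc_db=0.0):
    """1 where the local SNR exceeds the criterion, else 0 (speech- vs noise-dominated)."""
    snr_db = 10.0 * np.log10((S + 1e-12) / (N + 1e-12))
    return (snr_db > lc_db).astype(float)

def ideal_ratio_mask(S, N, beta=0.5):
    """Continuous value in [0, 1] per time-frequency unit."""
    return (S / (S + N + 1e-12)) ** beta

rng = np.random.default_rng(0)
S = rng.gamma(2.0, 1.0, size=(4, 6))   # toy speech energies per time-frequency unit
N = rng.gamma(2.0, 1.0, size=(4, 6))   # toy noise energies per time-frequency unit
print(ideal_binary_mask(S, N))
print(ideal_ratio_mask(S, N).round(2))
```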

  11. Linking Neural and Symbolic Representation and Processing of Conceptual Structures

    Directory of Open Access Journals (Sweden)

    Frank van der Velde

    2017-08-01

    Full Text Available We compare and discuss representations in two cognitive architectures aimed at representing and processing complex conceptual (sentence-like) structures. First is the Neural Blackboard Architecture (NBA), which aims to account for the representation and processing of complex and combinatorial conceptual structures in the brain. Second is IDyOT (Information Dynamics of Thinking), which derives sentence-like structures by learning statistical sequential regularities over a suitable corpus. Although IDyOT is designed at a level more abstract than the neural one, so that it is a model of cognitive function rather than of neural processing, there are strong similarities between the composite structures developed in IDyOT and the NBA. We hypothesize that these similarities form the basis of a combined architecture in which the individual strengths of each architecture are integrated. We outline and discuss the characteristics of this combined architecture, emphasizing the representation and processing of conceptual structures.

  12. A Mobile Asset Tracking System Architecture under Mobile-Stationary Co-Existing WSNs

    Science.gov (United States)

    Kim, Tae Hyon; Jo, Hyeong Gon; Lee, Jae Shin; Kang, Soon Ju

    2012-01-01

    The tracking of multiple wireless mobile nodes is not easy with current legacy WSN technologies, due to their inherent technical complexity, especially when heavy traffic and frequent movement of mobile nodes are encountered. To enable mobile asset tracking under these legacy WSN systems, it is necessary to design a specific system architecture that can manage numerous mobile nodes attached to mobile assets. In this paper, we present a practical system architecture including a communication protocol, a three-tier network, and server-side middleware for mobile asset tracking in legacy WSNs consisting of mobile-stationary co-existing infrastructures, and we prove the functionality of this architecture through careful evaluation in a test bed. Evaluation was carried out in a microwave anechoic chamber as well as on a straight road near our office. We evaluated communication mobility performance between mobile and stationary nodes, location-awareness performance, system stability under numerous mobile node conditions, and the successful packet transfer rate according to the speed of the mobile nodes. The results indicate that the proposed architecture is sufficiently robust for application in realistic mobile asset tracking services that require a large number of mobile nodes. PMID:23242277

  13. A Mobile Asset Tracking System Architecture under Mobile-Stationary Co-Existing WSNs

    Directory of Open Access Journals (Sweden)

    Soon Ju Kang

    2012-12-01

    Full Text Available The tracking of multiple wireless mobile nodes is not easy with current legacy WSN technologies, due to their inherent technical complexity, especially when heavy traffic and frequent movement of mobile nodes are encountered. To enable mobile asset tracking under these legacy WSN systems, it is necessary to design a specific system architecture that can manage numerous mobile nodes attached to mobile assets. In this paper, we present a practical system architecture including a communication protocol, a three-tier network, and server-side middleware for mobile asset tracking in legacy WSNs consisting of mobile-stationary co-existing infrastructures, and we prove the functionality of this architecture through careful evaluation in a test bed. Evaluation was carried out in a microwave anechoic chamber as well as on a straight road near our office. We evaluated communication mobility performance between mobile and stationary nodes, location-awareness performance, system stability under numerous mobile node conditions, and the successful packet transfer rate according to the speed of the mobile nodes. The results indicate that the proposed architecture is sufficiently robust for application in realistic mobile asset tracking services that require a large number of mobile nodes.

  14. Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise

    KAUST Repository

    Alfadly, Modar

    2018-01-01

    Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit yet-to-understand uncouth behaviours. One puzzling behaviour is the reaction of DNNs to various noise attacks, where it has been shown that there exists small adversarial noise that can result in a severe degradation in the performance of DNNs. To rigorously treat this, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, especially popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. We then propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. Upon training this network on a loss that is consolidated with the derived output probabilistic moments, the network is not only robust under very high variance Gaussian attacks but is also as robust as networks that are trained with 20-fold data augmentation.

  15. Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise

    KAUST Repository

    Alfadly, Modar M.

    2018-04-12

    Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit yet-to-understand uncouth behaviours. One puzzling behaviour is the reaction of DNNs to various noise attacks, where it has been shown that there exists small adversarial noise that can result in a severe degradation in the performance of DNNs. To rigorously treat this, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, especially popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. We then propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. Upon training this network on a loss that is consolidated with the derived output probabilistic moments, the network is not only robust under very high variance Gaussian attacks but is also as robust as networks that are trained with 20-fold data augmentation.
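    For a single ReLU unit with scalar Gaussian input, the first and second moments have a well-known closed form, which gives a feel for the network-level expressions derived in the paper. The sketch below uses only the standard Gaussian-ReLU identities (it is not the authors' full derivation) and checks them against Monte Carlo estimates.

```python
import numpy as np
from scipy.stats import norm

def relu_moments(mu, sigma):
    """Closed-form mean and variance of max(x, 0) for x ~ N(mu, sigma^2)."""
    a = mu / sigma
    mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
    second = (mu**2 + sigma**2) * norm.cdf(a) + mu * sigma * norm.pdf(a)
    return mean, second - mean**2

mu, sigma = 0.3, 1.2
rng = np.random.default_rng(0)
samples = np.maximum(rng.normal(mu, sigma, 1_000_000), 0.0)

analytic_mean, analytic_var = relu_moments(mu, sigma)
print("analytic :", round(analytic_mean, 4), round(analytic_var, 4))
print("empirical:", round(float(samples.mean()), 4), round(float(samples.var()), 4))
```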

  16. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
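    The multiplicative-layer idea can be sketched compactly: a small network maps scalar invariants to coefficients g_i, and the prediction is the sum of g_i times fixed basis tensors T_i, so the output stays in the invariant basis by construction. The example below is a toy with a two-tensor basis and random weights (the paper's model uses the full ten-tensor integrity basis and trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(invariants, W1, W2):
    """Tiny two-layer network mapping scalar invariants to basis coefficients g_i."""
    return np.tanh(invariants @ W1) @ W2

def tensor_basis(S, R):
    """Toy basis of two symmetric tensors built from strain- and rotation-rate tensors."""
    T1 = S
    T2 = S @ R - R @ S
    return np.stack([T1, T2])

n_inv, hidden, n_basis = 2, 8, 2
W1 = rng.normal(size=(n_inv, hidden))
W2 = rng.normal(size=(hidden, n_basis))

S = rng.normal(size=(3, 3))
S = 0.5 * (S + S.T)                       # symmetric strain-rate tensor (toy)
R = rng.normal(size=(3, 3))
R = 0.5 * (R - R.T)                       # antisymmetric rotation-rate tensor (toy)
invariants = np.array([[np.trace(S @ S), np.trace(R @ R)]])

g = mlp(invariants, W1, W2)[0]            # network-predicted coefficients
T = tensor_basis(S, R)
anisotropy = np.tensordot(g, T, axes=1)   # multiplicative layer: sum_i g_i * T_i
print(anisotropy.round(3))
```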

  17. Neural Global Pattern Similarity Underlies True and False Memories.

    Science.gov (United States)

    Ye, Zhifang; Zhu, Bi; Zhuang, Liping; Lu, Zhonglin; Chen, Chuansheng; Xue, Gui

    2016-06-22

    The neural processes giving rise to human memory strength signals remain poorly understood. Inspired by formal computational models that posit a central role of global matching in memory strength, we tested a novel hypothesis that the strengths of both true and false memories arise from the global similarity of an item's neural activation pattern during retrieval to that of all the studied items during encoding (i.e., the encoding-retrieval neural global pattern similarity [ER-nGPS]). We revealed multiple ER-nGPS signals that carried distinct information and contributed differentially to true and false memories: Whereas the ER-nGPS in the parietal regions reflected semantic similarity and was scaled with the recognition strengths of both true and false memories, ER-nGPS in the visual cortex contributed solely to true memory. Moreover, ER-nGPS differences between the parietal and visual cortices were correlated with frontal monitoring processes. By combining computational and neuroimaging approaches, our results advance a mechanistic understanding of memory strength in recognition. What neural processes give rise to memory strength signals, and lead to our conscious feelings of familiarity? Using fMRI, we found that the memory strength of a given item depends not only on how it was encoded during learning, but also on the similarity of its neural representation with other studied items. The global neural matching signal, mainly in the parietal lobule, could account for the memory strengths of both studied and unstudied items. Interestingly, a different global matching signal, originated from the visual cortex, could distinguish true from false memories. The findings reveal multiple neural mechanisms underlying the memory strengths of events registered in the brain. Copyright © 2016 the authors 0270-6474/16/366792-11$15.00/0.
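    The similarity measure itself is straightforward to compute once item-by-voxel activation patterns are available. A minimal sketch, assuming Pearson correlation as the pattern-similarity metric and toy pattern sizes (assumptions consistent with common practice rather than details taken from the paper):

```python
import numpy as np

def er_ngps(retrieval_patterns, encoding_patterns):
    """Encoding-retrieval neural global pattern similarity.

    For each retrieval pattern, the mean Pearson correlation with the
    activation patterns of all studied items at encoding.
    """
    r = np.corrcoef(retrieval_patterns, encoding_patterns)
    n_ret = retrieval_patterns.shape[0]
    cross = r[:n_ret, n_ret:]              # retrieval x encoding correlations
    return cross.mean(axis=1)

rng = np.random.default_rng(0)
encoding = rng.normal(size=(40, 200))      # 40 studied items x 200 voxels (toy sizes)
retrieval = rng.normal(size=(10, 200))     # 10 test items x 200 voxels
print(er_ngps(retrieval, encoding).round(3))
```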

  18. Neural processes underlying cultural differences in cognitive persistence.

    Science.gov (United States)

    Telzer, Eva H; Qu, Yang; Lin, Lynda C

    2017-08-01

    Self-improvement motivation, which occurs when individuals seek to improve upon their competence by gaining new knowledge and improving upon their skills, is critical for cognitive, social, and educational adjustment. While many studies have delineated the neural mechanisms supporting extrinsic motivation induced by monetary rewards, less work has examined the neural processes that support intrinsically motivated behaviors, such as self-improvement motivation. Because cultural groups traditionally vary in terms of their self-improvement motivation, we examined cultural differences in the behavioral and neural processes underlying motivated behaviors during cognitive persistence in the absence of extrinsic rewards. In Study 1, 71 American (47 females, M=19.68 years) and 68 Chinese (38 females, M=19.37 years) students completed a behavioral cognitive control task that required cognitive persistence across time. In Study 2, 14 American and 15 Chinese students completed the same cognitive persistence task during an fMRI scan. Across both studies, American students showed significant declines in cognitive performance across time, whereas Chinese participants demonstrated effective cognitive persistence. These behavioral effects were explained by cultural differences in self-improvement motivation and paralleled by increasing activation and functional coupling between the inferior frontal gyrus (IFG) and ventral striatum (VS) across the task among Chinese participants, neural activation and coupling that remained low in American participants. These findings suggest a potential neural mechanism by which the VS and IFG work in concert to promote cognitive persistence in the absence of extrinsic rewards. Thus, frontostriatal circuitry may be a neurobiological signal representing intrinsic motivation for self-improvement that serves an adaptive function, increasing Chinese students' motivation to engage in cognitive persistence. Copyright © 2017 Elsevier Inc. All rights

  19. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  20. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
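    The normalized Laplacian spectrum used in this analysis can be computed directly from any adjacency matrix. A short sketch on a toy undirected, binary random graph (the actual analyses use the anatomical connectivity matrices of the species mentioned above):

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2} for adjacency matrix A."""
    degrees = A.sum(axis=1)
    d_inv_sqrt = np.where(degrees > 0, 1.0 / np.sqrt(np.maximum(degrees, 1e-12)), 0.0)
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))  # all eigenvalues lie in [0, 2]

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # undirected graph, no self-loops
spectrum = normalized_laplacian_spectrum(A)
print(spectrum[:5].round(3), "...", spectrum[-3:].round(3))
```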

  1. A computational architecture for social agents

    Energy Technology Data Exchange (ETDEWEB)

    Bond, A.H. [California Institute of Technology, Pasadena, CA (United States)]

    1996-12-31

    This article describes a new class of information-processing models for social agents. They are derived from primate brain architecture, the processing in brain regions, the interactions among brain regions, and the social behavior of primates. In another paper, we have reviewed the neuroanatomical connections and functional involvements of cortical regions. We reviewed the evidence for a hierarchical architecture in the primate brain. By examining neuroanatomical evidence for connections among neural areas, we were able to establish anatomical regions and connections. We then examined evidence for specific functional involvements of the different neural areas and found some support for hierarchical functioning, not only for the perception hierarchies but also for the planning and action hierarchy in the frontal lobes.

  2. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    Science.gov (United States)

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch of fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sampling method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Neural networks for triggering

    International Nuclear Information System (INIS)

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab

  4. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    Science.gov (United States)

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

    Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused either on solely mimicking the neuron or on the synapse functionality. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low-resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit- and system-level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ∼100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  5. Neural mechanisms underlying morphine withdrawal in addicted patients: a review

    Directory of Open Access Journals (Sweden)

    Nima Babhadiashar

    2015-06-01

    Full Text Available Morphine is one of the most potent alkaloids in opium; it has substantial medical uses and was the first active principle purified from a herbal source. Morphine has commonly been used for the relief of moderate to severe pain, as it acts directly on the central nervous system; nonetheless, its chronic abuse increases tolerance and physical dependence, which is commonly known as opiate addiction. Morphine withdrawal syndrome comprises the physiological and behavioral symptoms that stem from prolonged exposure to morphine. A majority of brain regions are hypofunctional during prolonged abstinence and acute morphine withdrawal. Furthermore, several neural mechanisms are likely to contribute to morphine withdrawal. The present review summarizes the literature pertaining to the neural mechanisms underlying morphine withdrawal. Despite the fact that morphine withdrawal is a complex process, it is suggested that neural mechanisms play key roles in it.

  6. Neural Mechanisms Underlying Risk and Ambiguity Attitudes.

    Science.gov (United States)

    Blankenstein, Neeltje E; Peper, Jiska S; Crone, Eveline A; van Duijvenvoorde, Anna C K

    2017-11-01

    Individual differences in attitudes to risk (a taste for risk, known probabilities) and ambiguity (a tolerance for uncertainty, unknown probabilities) differentially influence risky decision-making. However, it is not well understood whether risk and ambiguity are coded differently within individuals. Here, we tested whether individual differences in risk and ambiguity attitudes were reflected in distinct neural correlates during choice and outcome processing of risky and ambiguous gambles. To these ends, we developed a neuroimaging task in which participants ( n = 50) chose between a sure gain and a gamble, which was either risky or ambiguous, and presented decision outcomes (gains, no gains). From a separate task in which the amount, probability, and ambiguity level were varied, we estimated individuals' risk and ambiguity attitudes. Although there was pronounced neural overlap between risky and ambiguous gambling in a network typically related to decision-making under uncertainty, relatively more risk-seeking attitudes were associated with increased activation in valuation regions of the brain (medial and lateral OFC), whereas relatively more ambiguity-seeking attitudes were related to temporal cortex activation. In addition, although striatum activation was observed during reward processing irrespective of a prior risky or ambiguous gamble, reward processing after an ambiguous gamble resulted in enhanced dorsomedial PFC activation, possibly functioning as a general signal of uncertainty coding. These findings suggest that different neural mechanisms reflect individual differences in risk and ambiguity attitudes and that risk and ambiguity may impact overt risk-taking behavior in different ways.

  7. Automatic Classification of volcano-seismic events based on Deep Neural Networks.

    Science.gov (United States)

    Titos Luzón, M.; Bueno Rodriguez, A.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Seismic monitoring of active volcanoes is a popular remote sensing technique to detect seismic activity, often associated with energy exchanges between the volcano and the environment. As a result, seismographs register a wide range of volcano-seismic signals that reflect the nature and underlying physics of volcanic processes. Machine learning and signal processing techniques provide an appropriate framework to analyze such data. In this research, we propose a new classification framework for seismic events based on deep neural networks. Deep neural networks are composed of multiple processing layers and can discover intrinsic patterns from the data itself. Internal parameters can be initialized using a greedy unsupervised pre-training stage, leading to efficient training of fully connected architectures. We aim to determine the robustness of these architectures as classifiers of seven different types of seismic events recorded at "Volcán de Fuego" (Colima, Mexico). Two deep neural networks with different pre-training strategies are studied: stacked denoising autoencoders and deep belief networks. Results are compared to existing machine learning algorithms (SVM, Random Forest, Multilayer Perceptron). We used 5 LPC coefficients over three non-overlapping segments as training features in order to characterize temporal evolution, avoid redundancy and encode the signal, regardless of its duration. Experimental results show that deep architectures can classify seismic events with higher accuracy than classical algorithms, attaining up to 92% recognition accuracy. Pre-training initialization helps these models to detect events that occur simultaneously in time (such as explosions and rockfalls), increases robustness against noisy inputs, and provides better generalization. These results demonstrate that deep neural networks are robust classifiers that can be deployed in real environments to monitor the seismicity of restless volcanoes.

  8. Analysis of surface ozone using a recurrent neural network.

    Science.gov (United States)

    Biancofiore, Fabio; Verdecchia, Marco; Di Carlo, Piero; Tomassetti, Barbara; Aruffo, Eleonora; Busilacchio, Marcella; Bianco, Sebastiano; Di Tommaso, Sinibaldo; Colangeli, Carlo

    2015-05-01

    Hourly concentrations of ozone (O₃) and nitrogen dioxide (NO₂) have been measured for 16 years, from 1998 to 2013, in a seaside town in central Italy. The seasonal trends of O₃ and NO₂ recorded in this period have been studied. Furthermore, we used the data collected during one year (2005) to define the characteristics of a multiple linear regression model and a neural network model. Both models are used to model the hourly O₃ concentration under two scenarios: 1) in the first, only meteorological parameters are used as inputs, and 2) in the second, photochemical parameters are added to those of the first scenario. In order to evaluate the performance of the models, four statistical criteria are used: correlation coefficient, fractional bias, normalized mean squared error and factor of two. All the criteria show that the neural network gives better results, compared to the regression model, in all the model scenarios. Predictions of O₃ have been carried out by many authors using a feed-forward neural architecture. In this paper we show that a recurrent architecture significantly improves the performance of neural predictors. Using only the meteorological parameters as input, the recurrent architecture shows better performance than the multiple linear regression model that uses meteorological and photochemical data as input, making the neural network model with recurrent architecture a more useful tool in areas where only weather measurements are available. Finally, we used the neural network model to forecast the O₃ hourly concentrations 1, 3, 6, 12, 24 and 48 h ahead. The performance of the model in predicting O₃ levels is discussed. Emphasis is given to the possibility of using the neural network model in operational ways in areas where only meteorological data are available, in order to predict O₃ also at sites where it has not yet been measured. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Bioprinting for Neural Tissue Engineering.

    Science.gov (United States)

    Knowlton, Stephanie; Anand, Shivesh; Shah, Twisha; Tasoglu, Savas

    2018-01-01

    Bioprinting is a method by which a cell-encapsulating bioink is patterned to create complex tissue architectures. Given the potential impact of this technology on neural research, we review the current state-of-the-art approaches for bioprinting neural tissues. While 2D neural cultures are ubiquitous for studying neural cells, 3D cultures can more accurately replicate the microenvironment of neural tissues. By bioprinting neuronal constructs, one can precisely control the microenvironment by specifically formulating the bioink for neural tissues, and by spatially patterning cell types and scaffold properties in three dimensions. We review a range of bioprinted neural tissue models and discuss how they can be used to observe how neurons behave, understand disease processes, develop new therapies and, ultimately, design replacement tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Optical resonators and neural networks

    Science.gov (United States)

    Anderson, Dana Z.

    1986-08-01

    It may be possible to implement neural network models using continuous-field optical architectures. These devices offer the inherent parallelism of propagating waves and an information density in principle dictated by the wavelength of light and the quality of the bulk optical elements. Few components are needed to construct a relatively large equivalent network. Various associative memories based on optical resonators have been demonstrated in the literature; a ring resonator design is discussed in detail here. Information is stored in a holographic medium and recalled through a competitive process in the gain medium supplying energy to the ring resonator. The resonator memory is the first realized example of a neural network function implemented with this kind of architecture.

  11. Incorporation of Tenascin-C into the Extracellular Matrix by Periostin Underlies an Extracellular Meshwork Architecture*

    OpenAIRE

    Kii, Isao; Nishiyama, Takashi; Li, Minqi; Matsumoto, Ken-ichi; Saito, Mitsuru; Amizuka, Norio; Kudo, Akira

    2009-01-01

    Extracellular matrix (ECM) underlies a complicated multicellular architecture that is subjected to significant forces from mechanical environment. Although various components of the ECM have been enumerated, mechanisms that evolve the sophisticated ECM architecture remain to be addressed. Here we show that periostin, a matricellular protein, promotes incorporation of tenascin-C into the ECM and organizes a meshwork architecture of the ECM. We found that both periostin null mice and tenascin-C...

  12. Automatic disease diagnosis using optimised weightless neural networks for low-power wearable devices.

    Science.gov (United States)

    Cheruku, Ramalingaswamy; Edla, Damodar Reddy; Kuppili, Venkatanareshbabu; Dharavath, Ramesh; Beechu, Nareshkumar Reddy

    2017-08-01

    Low-power wearable devices for disease diagnosis can be used anytime and anywhere. They are non-invasive and pain-free, for a better quality of life. However, these devices are resource-constrained in terms of memory and processing capability. The memory constraint allows these devices to store only a limited number of patterns, and the processing constraint leads to delayed responses. It is a challenging task to design a robust classification system under the above constraints with high accuracy. In this Letter, to resolve this problem, a novel architecture for weightless neural networks (WNNs) has been proposed. It uses variable-sized random access memories to optimise memory usage and a modified binary TRIE data structure to reduce the test time. In addition, a bio-inspired genetic algorithm has been employed to improve the accuracy. The proposed architecture is experimented on various disease datasets using its software and hardware realisations. The experimental results prove that the proposed architecture achieves better performance in terms of accuracy, memory saving and test time as compared to standard WNNs. It also outperforms conventional neural network-based classifiers in terms of accuracy. The proposed architecture can thus be a powerful component of low-power wearable devices, addressing their memory, accuracy and response-time issues.
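    A hedged sketch of the RAM-discriminator principle behind weightless neural networks, in the spirit of classic WiSARD: each class owns a set of RAM nodes addressed by random tuples of input bits, training writes addresses, and classification counts matches. The tuple size, random mapping, and toy data below are illustrative; the variable-sized RAMs, TRIE storage, and genetic tuning described in the Letter are not reproduced.

```python
import numpy as np

class RAMDiscriminator:
    """One discriminator per class: a set of RAM nodes addressed by bit tuples."""
    def __init__(self, n_bits, tuple_size, rng):
        self.mapping = rng.permutation(n_bits)        # random input-to-RAM mapping
        self.tuple_size = tuple_size
        self.rams = [set() for _ in range(n_bits // tuple_size)]

    def _addresses(self, bits):
        shuffled = bits[self.mapping]
        for i in range(len(self.rams)):
            yield i, tuple(shuffled[i * self.tuple_size:(i + 1) * self.tuple_size])

    def train(self, bits):
        for i, addr in self._addresses(bits):
            self.rams[i].add(addr)                    # write the seen address

    def score(self, bits):
        return sum(addr in self.rams[i] for i, addr in self._addresses(bits))

rng = np.random.default_rng(0)
n_bits, tuple_size = 64, 4
classes = {c: RAMDiscriminator(n_bits, tuple_size, rng) for c in (0, 1)}

prototypes = {c: (rng.random(n_bits) < 0.5).astype(int) for c in classes}
for c, proto in prototypes.items():
    for _ in range(30):                               # noisy training examples per class
        noisy = proto ^ (rng.random(n_bits) < 0.05)
        classes[c].train(noisy)

test = prototypes[1] ^ (rng.random(n_bits) < 0.05)
print({c: d.score(test) for c, d in classes.items()})  # class 1 should score higher
```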

  13. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  14. Parallelization of Neural Network Training for NLP with Hogwild!

    Directory of Open Access Journals (Sweden)

    Deyringer Valentin

    2017-10-01

    Full Text Available Neural Networks are prevalent in today's NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up the training of Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.
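    The core of Hogwild! is lock-free parallel SGD on shared parameters. A minimal sketch under stated assumptions (a toy logistic-regression problem with numpy and Python threads; real NLP models are far larger, and actual parallel speedups require an implementation that does not serialize the updates, e.g. shared-memory processes):

```python
import numpy as np
from threading import Thread

rng = np.random.default_rng(0)
n, d = 4000, 50
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)          # shared parameter vector, updated by all threads without locks

def worker(indices, lr=0.05, epochs=5):
    for _ in range(epochs):
        for i in indices:
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w[:] = w - lr * (p - y[i]) * X[i]    # Hogwild!-style lock-free update

threads = [Thread(target=worker, args=(np.arange(k, n, 4),)) for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

accuracy = (((X @ w) > 0).astype(float) == y).mean()
print(f"training accuracy after lock-free parallel SGD: {accuracy:.3f}")
```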

  15. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  16. Neural mechanisms underlying melodic perception and memory for pitch.

    Science.gov (United States)

    Zatorre, R J; Evans, A C; Meyer, E

    1994-04-01

    The neural correlates of music perception were studied by measuring cerebral blood flow (CBF) changes with positron emission tomography (PET). Twelve volunteers were scanned using the bolus water method under four separate conditions: (1) listening to a sequence of noise bursts, (2) listening to unfamiliar tonal melodies, (3) comparing the pitch of the first two notes of the same set of melodies, and (4) comparing the pitch of the first and last notes of the melodies. The latter two conditions were designed to investigate short-term pitch retention under low or high memory load, respectively. Subtraction of the obtained PET images, superimposed on matched MRI scans, provides anatomical localization of CBF changes associated with specific cognitive functions. Listening to melodies, relative to acoustically matched noise sequences, resulted in CBF increases in the right superior temporal and right occipital cortices. Pitch judgments of the first two notes of each melody, relative to passive listening to the same stimuli, resulted in right frontal-lobe activation. Analysis of the high memory load condition relative to passive listening revealed the participation of a number of cortical and subcortical regions, notably in the right frontal and right temporal lobes, as well as in parietal and insular cortex. Both pitch judgment conditions also revealed CBF decreases within the left primary auditory cortex. We conclude that specialized neural systems in the right superior temporal cortex participate in perceptual analysis of melodies; pitch comparisons are effected via a neural network that includes right prefrontal cortex, but active retention of pitch involves the interaction of right temporal and frontal cortices.

  17. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.

  18. Elements of neurogeometry functional architectures of vision

    CERN Document Server

    Petitot, Jean

    2017-01-01

    This book describes several mathematical models of the primary visual cortex, referring them to a vast ensemble of experimental data and putting forward an original geometrical model for its functional architecture, that is, the highly specific organization of its neural connections. The book spells out the geometrical algorithms implemented by this functional architecture, or put another way, the “neurogeometry” immanent in visual perception. Focusing on the neural origins of our spatial representations, it demonstrates three things: firstly, the way the visual neurons filter the optical signal is closely related to a wavelet analysis; secondly, the contact structure of the 1-jets of the curves in the plane (the retinal plane here) is implemented by the cortical functional architecture; and lastly, the visual algorithms for integrating contours from what may be rather incomplete sensory data can be modelled by the sub-Riemannian geometry associated with this contact structure. As such, it provides rea...

  19. The gamma model : a new neural network for temporal processing

    NARCIS (Netherlands)

    Vries, de B.

    1992-01-01

    In this paper we develop the gamma neural model, a new neural net architecture for processing of temporal patterns. Time varying patterns are normally segmented into a sequence of static patterns that are successively presented to a neural net. In the approach presented here segmentation is avoided.
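    The gamma memory that gives the model its name replaces hard segmentation with a cascade of leaky taps over the input signal. A short sketch of the standard gamma-filter recursion (the number of taps and the value of mu are arbitrary illustrative choices):

```python
import numpy as np

def gamma_memory(signal, n_taps=4, mu=0.3):
    """Gamma filter taps: x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1), with x_0 = input."""
    taps = np.zeros(n_taps + 1)
    history = []
    for sample in signal:
        prev = taps.copy()
        taps[0] = sample
        for k in range(1, n_taps + 1):
            taps[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        history.append(taps[1:].copy())
    return np.array(history)        # shape (time, n_taps): a short adaptive history per step

t = np.arange(100)
signal = np.sin(2 * np.pi * t / 20)
print(gamma_memory(signal)[:3].round(3))   # these tap vectors feed a static network
```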

  20. Distorted Character Recognition Via An Associative Neural Network

    Science.gov (United States)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in on-going neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing the machine an ability to recognize distorted objects within the same object class. It is the author's belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input cartesian image field, then sequency-filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's crosscorrelation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2], which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage, and a discussion of the use of the proposed neural architectures, are included.
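    The first proposed memory, an outer-product linear associative matrix with a threshold on recall, can be sketched in a few lines (bipolar coding, the pattern sizes, and the sign threshold are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 100, 5

patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))     # bipolar feature vectors
M = sum(np.outer(p, p) for p in patterns) / n                 # outer-product (Hebbian) storage
np.fill_diagonal(M, 0.0)

def recall(cue):
    return np.sign(M @ cue)          # threshold function applied to the linear recall

cue = patterns[2].copy()
flip = rng.choice(n, size=10, replace=False)
cue[flip] *= -1                      # distort 10% of the stored pattern
overlap = recall(cue) @ patterns[2] / n
print(f"overlap of recalled pattern with the stored one: {overlap:.2f}")
```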

  1. Rotation Invariance Neural Network

    OpenAIRE

    Li, Shiyuan

    2017-01-01

    Rotation invariance and translation invariance have great value in image recognition tasks. In this paper, we introduce a new architecture for convolutional neural networks (CNNs), named the cyclic convolutional layer, to achieve rotation invariance in 2-D symbol recognition. We can also obtain the position and orientation of the 2-D symbol from the network, to achieve detection of multiple non-overlapping targets. Last but not least, this architecture can achieve one-shot learning in some cases using thos...

  2. Cosmic-ray discrimination capabilities of DELTA E-E silicon nuclear telescopes using neural networks

    CERN Document Server

    Ambriola, M; Cafagna, F; Castellano, M; Ciacio, F; Circella, M; De Marzo, C N; Montaruli, T

    2000-01-01

    An isotope classifier of cosmic-ray events collected by space detectors has been implemented using a multi-layer perceptron neural architecture. In order to handle a great number of different isotopes a modular architecture of the 'mixture of experts' type is proposed. The performance of this classifier has been tested on simulated data and has been compared with a 'classical' classifying procedure. The quantitative comparison with traditional techniques shows that the neural approach has classification performances comparable - within 1% - with that of the classical one, with efficiency of the order of 98%. A possible hardware implementation of such a kind of neural architecture in future space missions is considered.

  3. Neural Monkey: An Open-source Tool for Sequence Learning

    Directory of Open Access Journals (Sweden)

    Helcl Jindřich

    2017-04-01

    Full Text Available In this paper, we announce the development of Neural Monkey – an open-source neural machine translation (NMT) and general sequence-to-sequence learning system built over the TensorFlow machine learning library. The system provides a high-level API tailored for fast prototyping of complex architectures with multiple sequence encoders and decoders. Models’ overall architecture is specified in easy-to-read configuration files. The long-term goal of the Neural Monkey project is to create and maintain a growing collection of implementations of recently proposed components or methods, and therefore it is designed to be easily extensible. Trained models can be deployed either for batch data processing or as a web service. In the presented paper, we describe the design of the system and introduce the reader to running experiments using Neural Monkey.

  4. Load forecasting using different architectures of neural networks with the assistance of the MATLAB toolboxes; Previsao de cargas eletricas utilizando diferentes arquiteturas de redes neurais artificiais com o auxilio das toolboxes do MATLAB

    Energy Technology Data Exchange (ETDEWEB)

    Nose Filho, Kenji; Araujo, Klayton A.M.; Maeda, Jorge L.Y.; Lotufo, Anna Diva P. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil)], Emails: kenjinose@yahoo.com.br, klayton_ama@hotmail.com, jorge-maeda@hotmail.com, annadiva@dee.feis.unesp.br

    2009-07-01

    This paper presents the development and implementation of a program for electrical load forecasting with data from a Brazilian electrical company, using four different neural network architectures from the MATLAB toolboxes: multilayer backpropagation gradient descent with momentum, multilayer backpropagation Levenberg-Marquardt, adaptive network based fuzzy inference system and general regression neural network. The program presented a satisfactory performance, yielding very good results. (author)

  5. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  6. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
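
    A minimal sketch of one of the recurrent building blocks compared above (the bottom-up plus lateral, or BL, case) is given below, assuming a PyTorch implementation in which the layer re-applies a lateral convolution of its own previous output for a fixed number of time steps. Channel counts, kernel size and the number of steps are illustrative assumptions, not the published configuration.

```python
# Sketch of a bottom-up + lateral (BL) recurrent convolutional layer.
import torch
import torch.nn as nn

class BLConvLayer(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.bottom_up = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)   # B connection
        self.lateral = nn.Conv2d(out_ch, out_ch, k, padding=k // 2)    # L connection
        self.relu = nn.ReLU()

    def forward(self, x, steps=4):
        h = self.relu(self.bottom_up(x))            # t = 0: feedforward pass only
        for _ in range(steps - 1):                  # t > 0: add recurrent lateral input
            h = self.relu(self.bottom_up(x) + self.lateral(h))
        return h

# toy usage on a batch of occluded-digit-sized images
layer = BLConvLayer(1, 16)
out = layer(torch.randn(8, 1, 32, 32))
print(out.shape)  # torch.Size([8, 16, 32, 32])
```

    Stacking several such layers, and adding convolutions that feed higher-layer states back to lower layers, would give the top-down (T) variants in the same spirit.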

  7. Urban landscape architecture design under the view of sustainable development

    Science.gov (United States)

    Chen, WeiLin

    2017-08-01

    The concept of sustainable development in modern urban landscape design advocates sustainable landscape architecture, which is the main development direction in the field of landscape design and an effective means of promoting the sustainable development of urban green space. On this basis, and drawing on the connotations of sustainable development and sustainable design, this paper analyzes and discusses urban landscape design under the concept of sustainable development.

  8. SYNAPTIC DEPRESSION IN DEEP NEURAL NETWORKS FOR SPEECH PROCESSING.

    Science.gov (United States)

    Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima

    2016-03-01

    A characteristic property of biological neurons is their ability to dynamically change the synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, significantly contributes to the formation of normalized representation of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity in the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between the biological performance and artificial computing, resulting in networks that better generalize to novel signal conditions.

  9. Information Extraction with Character-level Neural Networks and Free Noisy Supervision

    OpenAIRE

    Meerkamp, Philipp; Zhou, Zhengyi

    2016-01-01

    We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn compl...

  10. ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation

    OpenAIRE

    Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron

    2015-01-01

    We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally ...

  11. assessment of neural networks performance in modeling rainfall ...

    African Journals Online (AJOL)

    Sholagberu

    neural network architecture for precipitation prediction of Myanmar, World Academy of Science, Engineering and Technology, 48, pp. 130–134. Kumarasiri, A.D. and Sonnadara, D.U.J. (2006). Rainfall forecasting: an artificial neural network approach, Proceedings of the Technical Sessions, 22, pp. 1–13, Institute of Physics ...

  12. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  13. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  14. Development switch in neural circuitry underlying odor-malaise learning.

    Science.gov (United States)

    Shionoya, Kiseko; Moriceau, Stephanie; Lunday, Lauren; Miner, Cathrine; Roth, Tania L; Sullivan, Regina M

    2006-01-01

    Fetal and infant rats can learn to avoid odors paired with illness before development of brain areas supporting this learning in adults, suggesting an alternate learning circuit. Here we begin to document the transition from the infant to adult neural circuit underlying odor-malaise avoidance learning using LiCl (0.3 M; 1% of body weight, ip) and a 30-min peppermint-odor exposure. Conditioning groups included: Paired odor-LiCl, Paired odor-LiCl-Nursing, LiCl, and odor-saline. Results showed that Paired LiCl-odor conditioning induced a learned odor aversion in postnatal day (PN) 7, 12, and 23 pups. Odor-LiCl Paired Nursing induced a learned odor preference in PN7 and PN12 pups but blocked learning in PN23 pups. 14C 2-deoxyglucose (2-DG) autoradiography indicated enhanced olfactory bulb activity in PN7 and PN12 pups with odor preference and avoidance learning. The odor aversion in weanling aged (PN23) pups resulted in enhanced amygdala activity in Paired odor-LiCl pups, but not if they were nursing. Thus, the neural circuit supporting malaise-induced aversions changes over development, indicating that similar infant and adult-learned behaviors may have distinct neural circuits.

  15. Brain inspired hardware architectures - Can they be used for particle physics ?

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    After their inception in the 1940s and several decades of moderate success, artificial neural networks have recently demonstrated impressive achievements in analysing big data volumes. Wide and deep network architectures can now be trained using high performance computing systems, graphics card clusters in particular. Despite their successes these state-of-the-art approaches suffer from very long training times and huge energy consumption, in particular during the training phase. The biological brain can perform similar and superior classification tasks in the space and time domains, but at the same time exhibits very low power consumption, rapid unsupervised learning capabilities and fault tolerance. In the talk the differences between classical neural networks and neural circuits in the brain will be presented. Recent hardware implementations of neuromorphic computing systems and their applications will be shown. Finally, some initial ideas to use accelerated neural architectures as trigger processors i...

  16. Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks

    Science.gov (United States)

    Jorgensen, Charles C.

    1997-01-01

    A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off- nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.

  17. NEURAL NETWORK SYSTEM FOR DIAGNOSTICS OF AVIATION DESIGNATION PRODUCTS

    Directory of Open Access Journals (Sweden)

    В. Єременко

    2011-02-01

    Full Text Available In this article, a hybrid neural network with a Kohonen layer and a multilayer perceptron is proposed for solving the problem of classifying the technical state of an object. The information-measuring system can be used for standardless diagnostics, cluster analysis and classification of products made from composite materials. The advantages of this architecture are flexibility, high performance, the ability to use different methods for collecting diagnostic information about the unit under test, and high reliability of information processing.

  18. Linking neural and symbolic representation and processing of conceptual structures

    NARCIS (Netherlands)

    van der Velde, Frank; Forth, Jamie; Nazareth, Deniece S.; Wiggins, Geraint A.

    2017-01-01

    We compare and discuss representations in two cognitive architectures aimed at representing and processing complex conceptual (sentence-like) structures. First is the Neural Blackboard Architecture (NBA), which aims to account for representation and processing of complex and combinatorial conceptual

  19. A mixed-scale dense convolutional neural network for image analysis

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); J.A. Sethian (James)

    2016-01-01

    textabstractDeep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results

  20. Neural mechanisms underlying cognitive control of men with lifelong antisocial behavior.

    Science.gov (United States)

    Schiffer, Boris; Pawliczek, Christina; Müller, Bernhard; Forsting, Michael; Gizewski, Elke; Leygraf, Norbert; Hodgins, Sheilagh

    2014-04-30

    Results of meta-analyses suggested subtle deficits in cognitive control among antisocial individuals. Because almost all studies focused on children with conduct problems or adult psychopaths, however, little is known about cognitive control mechanisms among the majority of persistent violent offenders who present an antisocial personality disorder (ASPD). The present study aimed to determine whether offenders with ASPD, relative to non-offenders, display dysfunction in the neural mechanisms underlying cognitive control and to assess the extent to which these dysfunctions are associated with psychopathic traits and trait impulsivity. Participants comprised 21 violent offenders and 23 non-offenders who underwent event-related functional magnetic resonance imaging while performing a non-verbal Stroop task. The offenders, relative to the non-offenders, exhibited reduced response time interference and a different pattern of conflict- and error-related activity in brain areas involved in cognitive control, attention, language, and emotion processing, that is, the anterior cingulate, dorsolateral prefrontal, superior temporal and postcentral cortices, putamen, thalamus, and amygdala. Moreover, between-group differences in behavioural and neural responses revealed associations with core features of psychopathy and attentional impulsivity. Thus, the results of the present study confirmed the hypothesis that offenders with ASPD display alterations in the neural mechanisms underlying cognitive control and that those alterations relate, at least in part, to personality characteristics. Copyright © 2014. Published by Elsevier Ireland Ltd.

  1. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty of building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.

  2. The plasma automata network (PAN) architecture

    International Nuclear Information System (INIS)

    Cameron-Carey, C.M.

    1991-01-01

    Conventional neural networks consist of processing elements which are interconnected according to a specified topology. Typically, the number of processing elements and the interconnection topology are fixed. A neural network's information processing capability lies mainly in the variability of interconnection strengths, which directly influence activation patterns; these patterns represent entities and their interrelationships. Contrast this architecture, with its fixed topology and variable interconnection strengths, against one having dynamic topology and fixed connection strength. This paper reports on the proposed architecture in which there are no connections between processing elements. Instead, the processing elements form a plasma, exchanging information upon collision. A plasma can be populated with several different types of processing elements, each with their own activation function and self-modification mechanism. The activation patterns that are the plasma's response to stimulation drive natural selection among processing elements, which evolve to optimize performance.

  3. Residual Deep Convolutional Neural Network Predicts MGMT Methylation Status.

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L; Lachance, Daniel H; Parney, Ian F; Buckner, Jan C; Erickson, Bradley J

    2017-10-01

    Predicting the methylation status of the O6-methylguanine methyltransferase (MGMT) gene from MRI is of high importance since it is a predictor of response and prognosis in brain tumors. In this study, we compare three different residual deep neural network (ResNet) architectures to evaluate their ability in predicting MGMT methylation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture was the best performing model, achieving an accuracy of 94.90% (+/- 3.92%) for the test set (classification of a slice as no tumor, methylated MGMT, or non-methylated). ResNet34 (34 layers) achieved 80.72% (+/- 13.61%) while ResNet18 (18 layers) accuracy was 76.75% (+/- 20.67%). ResNet50 performance was statistically significantly better than that of both the ResNet18 and ResNet34 architectures, showing that residual deep neural architectures can be used to predict molecular biomarkers from routine medical images.

  4. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    Science.gov (United States)

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks, by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a notable statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1, and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Considering that the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.
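
    The sketch below loosely illustrates the verbal description above, with each unit's contribution randomly either enhanced or reversed rather than zeroed, alongside standard inverted Dropout for comparison. The probabilities and enhance/reverse factors are illustrative assumptions and do not reproduce the paper's exact weight-level formulation.

```python
# Loose sketch: randomly enhance or reverse each unit's contribution (Shakeout-like),
# compared with zeroing it (Dropout). Factors p_reverse and c are illustrative.
import numpy as np

def shakeout_like(activations, p_reverse=0.3, c=0.5, rng=None):
    """Apply a random enhance-or-reverse mask to a batch of unit activations."""
    rng = np.random.default_rng() if rng is None else rng
    reverse = rng.random(activations.shape[1]) < p_reverse
    scale = np.where(reverse, -c, 1.0 / (1.0 - p_reverse))   # reverse vs. enhance
    return activations * scale

def dropout(activations, p_drop=0.3, rng=None):
    """Standard (inverted) dropout, for comparison: units are zeroed, not reversed."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(activations.shape[1]) >= p_drop
    return activations * keep / (1.0 - p_drop)

a = np.ones((4, 6))
print(shakeout_like(a, rng=np.random.default_rng(0)))
print(dropout(a, rng=np.random.default_rng(0)))
```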

  5. Age-related neural correlates of cognitive task performance under increased postural load

    NARCIS (Netherlands)

    Van Impe, A; Bruijn, S M; Coxon, J P; Wenderoth, N; Sunaert, S; Duysens, J; Swinnen, S P

    2013-01-01

    Behavioral studies suggest that postural control requires increased cognitive control and visuospatial processing with aging. Consequently, performance can decline when concurrently performing a postural and a demanding cognitive task. We aimed to identify the neural substrate underlying this

  6. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    Science.gov (United States)

    Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under the tetanus stimulus training, the robot performed better and better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, and also theoretical inspiration for the next generation of neuro-prostheses on the basis of the bi-directional exchange of information within hierarchical neural networks.

  7. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    Directory of Open Access Journals (Sweden)

    Yongcheng Li

    Full Text Available We propose the architecture of a novel robot system merging biological and artificial intelligence based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under the tetanus stimulus training, the robot performed better and better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, and also theoretical inspiration for the next generation of neuro-prostheses on the basis of the bi-directional exchange of information within hierarchical neural networks.

  8. Neural mechanisms underlying sensitivity to reverse-phi motion in the fly.

    Science.gov (United States)

    Leonhardt, Aljoscha; Meier, Matthias; Serbe, Etienne; Eichner, Hubert; Borst, Alexander

    2017-01-01

    Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics.
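
    To illustrate the algorithmic account referred to above, the following sketch uses a textbook correlation-type (Hassenstein-Reichardt-like) motion detector, not the authors' fitted circuit model: when a moving dot reverses contrast on every frame, the delay-and-multiply correlation, and hence the detected direction, changes sign.

```python
# Why a correlation-type motion detector inverts its response under reverse-phi.
import numpy as np

def correlator_response(stimulus):
    """Summed output of mirror-symmetric delay-and-correlate subunits.

    stimulus: array of shape (time, space); the delay is one frame and
    neighbouring pixels form each detector pair."""
    a, b = stimulus[:, :-1], stimulus[:, 1:]
    out = a[:-1] * b[1:] - b[:-1] * a[1:]   # delayed-left*right minus delayed-right*left
    return out.sum()

T, X = 20, 20
phi = np.zeros((T, X))
reverse_phi = np.zeros((T, X))
for t in range(T):
    phi[t, t % X] = 1.0                      # bright dot stepping rightwards
    reverse_phi[t, t % X] = (-1.0) ** t      # same motion, but contrast flips each frame

print(correlator_response(phi))          # positive: rightward motion detected
print(correlator_response(reverse_phi))  # negative: perceived direction inverts
```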

  9. Anomaly detection in an automated safeguards system using neural networks

    International Nuclear Information System (INIS)

    Whiteson, R.; Howell, J.A.

    1992-01-01

    An automated safeguards system must be able to detect an anomalous event, identify the nature of the event, and recommend a corrective action. Neural networks represent a new way of thinking about basic computational mechanisms for intelligent information processing. In this paper, we discuss the issues involved in applying a neural network model to the first step of this process: anomaly detection in materials accounting systems. We extend our previous model to a 3-tank problem and compare different neural network architectures and algorithms. We evaluate the computational difficulties in training neural networks and explore how certain design principles affect the problems. The issues involved in building a neural network architecture include how the information flows, how the network is trained, how the neurons in a network are connected, how the neurons process information, and how the connections between neurons are modified. Our approach is based on the demonstrated ability of neural networks to model complex, nonlinear, real-time processes. By modeling the normal behavior of the processes, we can predict how a system should be behaving and, therefore, detect when an abnormality occurs

  10. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)

  11. Parallel protein secondary structure prediction based on neural networks.

    Science.gov (United States)

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers for protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. A hydrophobicity matrix, an orthogonal matrix, BLOSUM62 and PSSM (position specific scoring matrix) are tested separately as the encoding schemes for the DBNN. The experimental results contribute to the design of new encoding schemes. The new binary classifier for Helix versus not Helix (∼H) for the DBNN produces a prediction accuracy of 87% when the PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to other leading prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Due to the time-consuming task of training the neural networks, Pthreads and OpenMP are employed to parallelize the DBNN on the hyperthreading-enabled Intel architecture. The speedup for 16 Pthreads is 4.9 and the speedup for 16 OpenMP threads is 4 on the 4-processor shared-memory architecture. The speedup performance of both OpenMP and Pthreads is superior to that reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology for the Intel architecture is efficient for parallel biological algorithms.
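
    As a small illustration of one of the encoding schemes mentioned above, the sketch below builds the 'orthogonal matrix' (one-hot) input for a sliding window of residues; the window width and alphabet ordering are illustrative assumptions.

```python
# One-hot ('orthogonal matrix') encoding of a sliding window of amino acids.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
ORTHOGONAL = np.eye(len(AMINO_ACIDS))

def encode_window(sequence, center, half_width=6):
    """Encode a window of residues around `center`; positions off the ends are zero vectors."""
    rows = []
    for i in range(center - half_width, center + half_width + 1):
        if 0 <= i < len(sequence):
            rows.append(ORTHOGONAL[AMINO_ACIDS.index(sequence[i])])
        else:
            rows.append(np.zeros(len(AMINO_ACIDS)))
    return np.concatenate(rows)

x = encode_window("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", center=10)
print(x.shape)  # (13 windows positions * 20 amino acids,) = (260,)
```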

  12. Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei; Zhang, Lei

    2015-01-01

    Highlights: • Four hybrid algorithms are proposed for the wind speed decomposition. • Adaboost algorithm is adopted to provide a hybrid training framework. • MLP neural networks are built to do the forecasting computation. • Four important network training algorithms are included in the MLP networks. • All the proposed hybrid algorithms are suitable for the wind speed predictions. - Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks, including the GD-ALR-BP algorithm, GDM-ALR-BP algorithm, CG-BP-FR algorithm and BFGS algorithm. The aim of the study is to investigate the improvement in forecasting performance of the MLP neural networks brought by the Adaboost algorithm's optimization under various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. The experimental results show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for wind speed predictions; (2) the Adaboost algorithm has promoted the forecasting performance of the MLP neural networks considerably; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improvement percentages of the MLP neural networks by the Adaboost algorithm decrease step by step with the following sequence of training algorithms: GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS.
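
    The sketch below outlines an Adaboost-style ensemble of MLP regressors in the spirit of the hybrid architecture described above, using a simplified AdaBoost.R2-type loop with weighted resampling. The specific backpropagation training variants compared in the paper are not reproduced, and the lag features, network size and number of rounds are illustrative assumptions.

```python
# Simplified AdaBoost.R2-style boosting of sklearn MLP regressors on a toy series.
import numpy as np
from sklearn.neural_network import MLPRegressor

def adaboost_mlp(X, y, n_rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.full(n, 1.0 / n)
    models, betas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, p=w)                   # weighted resampling of the training set
        m = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X[idx], y[idx])
        err = np.abs(m.predict(X) - y)
        loss = err / err.max()                             # linear loss scaled to [0, 1]
        avg_loss = float(np.sum(w * loss))
        if avg_loss >= 0.5 and models:                     # stop once the weak learner is too weak
            break
        beta = max(avg_loss, 1e-6) / max(1.0 - avg_loss, 1e-6)
        models.append(m)
        betas.append(beta)
        w = w * beta ** (1.0 - loss)                       # well-predicted samples are down-weighted
        w = w / w.sum()
    return models, np.log(1.0 / np.array(betas))           # log(1/beta): larger weight = better round

def ensemble_predict(models, weights, X):
    preds = np.stack([m.predict(X) for m in models])
    return np.average(preds, axis=0, weights=weights)      # weighted mean (AdaBoost.R2 uses a weighted median)

# toy usage: a synthetic 'wind speed' series turned into lag features
t = np.arange(300, dtype=float)
speed = 8.0 + 2.0 * np.sin(t / 10.0) + np.random.default_rng(1).normal(0.0, 0.3, t.size)
X = np.stack([speed[i:i + 5] for i in range(len(speed) - 6)])   # five previous values as inputs
y = speed[5:-1]                                                 # next value as the target
models, weights = adaboost_mlp(X[:250], y[:250])
print(np.abs(ensemble_predict(models, weights, X[250:]) - y[250:]).mean())
```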

  13. Microscale architecture in biomaterial scaffolds for spatial control of neural cell behavior

    Science.gov (United States)

    Meco, Edi; Lampe, Kyle J.

    2018-02-01

    Biomaterial scaffolds mimic aspects of the native central nervous system (CNS) extracellular matrix (ECM) and have been extensively utilized to influence neural cell (NC) behavior in in vitro and in vivo settings. These biomimetic scaffolds support NC cultures, can direct the differentiation of NCs, and have recapitulated some native NC behavior in an in vitro setting. However, NC transplant therapies and treatments used in animal models of CNS disease and injury have not fully restored functionality. The observed lack of functional recovery occurs despite improvements in transplanted NC viability when incorporating biomaterial scaffolds and the potential of NC to replace damaged native cells. The behavior of NCs within biomaterial scaffolds must be directed in order to improve the efficacy of transplant therapies and treatments. Biomaterial scaffold topography and imbedded bioactive cues, designed at the microscale level, can alter NC phenotype, direct migration, and differentiation. Microscale patterning in biomaterial scaffolds for spatial control of NC behavior has enhanced the capabilities of in vitro models to capture properties of the native CNS tissue ECM. Patterning techniques such as lithography, electrospinning and 3D bioprinting can be employed to design the microscale architecture of biomaterial scaffolds. Here, the progress and challenges of the prevalent biomaterial patterning techniques of lithography, electrospinning, and 3D bioprinting are reported. This review analyzes NC behavioral response to specific microscale topographical patterns and spatially organized bioactive cues.

  14. Biological neural networks as model systems for designing future parallel processing computers

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this simplest of neural networks in sufficient detail for extrapolation to computers and robots, a start was made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  15. Incorporation of tenascin-C into the extracellular matrix by periostin underlies an extracellular meshwork architecture.

    Science.gov (United States)

    Kii, Isao; Nishiyama, Takashi; Li, Minqi; Matsumoto, Ken-Ichi; Saito, Mitsuru; Amizuka, Norio; Kudo, Akira

    2010-01-15

    Extracellular matrix (ECM) underlies a complicated multicellular architecture that is subjected to significant forces from mechanical environment. Although various components of the ECM have been enumerated, mechanisms that evolve the sophisticated ECM architecture remain to be addressed. Here we show that periostin, a matricellular protein, promotes incorporation of tenascin-C into the ECM and organizes a meshwork architecture of the ECM. We found that both periostin null mice and tenascin-C null mice exhibited a similar phenotype, confined tibial periostitis, which possibly corresponds to medial tibial stress syndrome in human sports injuries. Periostin possessed adjacent domains that bind to tenascin-C and the other ECM protein: fibronectin and type I collagen, respectively. These adjacent domains functioned as a bridge between tenascin-C and the ECM, which increased deposition of tenascin-C on the ECM. The deposition of hexabrachions of tenascin-C may stabilize bifurcations of the ECM fibrils, which is integrated into the extracellular meshwork architecture. This study suggests a role for periostin in adaptation of the ECM architecture in the mechanical environment.

  16. Incorporation of Tenascin-C into the Extracellular Matrix by Periostin Underlies an Extracellular Meshwork Architecture*

    Science.gov (United States)

    Kii, Isao; Nishiyama, Takashi; Li, Minqi; Matsumoto, Ken-ichi; Saito, Mitsuru; Amizuka, Norio; Kudo, Akira

    2010-01-01

    Extracellular matrix (ECM) underlies a complicated multicellular architecture that is subjected to significant forces from mechanical environment. Although various components of the ECM have been enumerated, mechanisms that evolve the sophisticated ECM architecture remain to be addressed. Here we show that periostin, a matricellular protein, promotes incorporation of tenascin-C into the ECM and organizes a meshwork architecture of the ECM. We found that both periostin null mice and tenascin-C null mice exhibited a similar phenotype, confined tibial periostitis, which possibly corresponds to medial tibial stress syndrome in human sports injuries. Periostin possessed adjacent domains that bind to tenascin-C and the other ECM protein: fibronectin and type I collagen, respectively. These adjacent domains functioned as a bridge between tenascin-C and the ECM, which increased deposition of tenascin-C on the ECM. The deposition of hexabrachions of tenascin-C may stabilize bifurcations of the ECM fibrils, which is integrated into the extracellular meshwork architecture. This study suggests a role for periostin in adaptation of the ECM architecture in the mechanical environment. PMID:19887451

  17. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.

    Science.gov (United States)

    Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita

    2018-03-01

    Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of stacked two convolutional layers interspersed with max pooling layers for feature extraction and two fully connected layers, with data augmentation strategies to boost performance. The use of a neural network results in a higher average classification accuracy of 92%. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
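
    A hedged PyTorch sketch of the eight-learned-layer design described above (three blocks of two stacked convolutions each followed by max pooling, then two fully connected layers over three classes) is given below; channel widths, kernel sizes and the input tile size are illustrative assumptions rather than the published configuration.

```python
# Sketch of a 6-conv + 2-FC classifier over three classes (viable tumor, necrosis, non-tumor).
import torch
import torch.nn as nn

class OsteosarcomaCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),   # assumes 64x64 RGB input tiles
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = OsteosarcomaCNN()
print(model(torch.randn(4, 3, 64, 64)).shape)  # torch.Size([4, 3])
```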

  18. Using a neural network approach for muon reconstruction and triggering

    CERN Document Server

    Etzion, E; Abramowicz, H; Benhammou, Ya; Horn, D; Levinson, L; Livneh, R

    2004-01-01

    The extremely high rate of events that will be produced in the future Large Hadron Collider requires the triggering mechanism to take precise decisions in a few nanoseconds. We present a study which used an artificial neural network triggering algorithm and compared it to the performance of a dedicated electronic muon triggering system. A relatively simple architecture was used to solve a complicated inverse problem. A comparison with a realistic example of the ATLAS first-level trigger simulation was in favour of the neural network. A similar architecture trained after the simulation of the first-level electronics trigger stage showed a further background rejection.

  19. Disrupted resting-state functional architecture of the brain after 45-day simulated microgravity

    Science.gov (United States)

    Zhou, Yuan; Wang, Yun; Rao, Li-Lin; Liang, Zhu-Yuan; Chen, Xiao-Ping; Zheng, Dang; Tan, Cheng; Tian, Zhi-Qiang; Wang, Chun-Hui; Bai, Yan-Qiang; Chen, Shan-Guang; Li, Shu

    2014-01-01

    Long-term spaceflight induces both physiological and psychological changes in astronauts. To understand the neural mechanisms underlying these physiological and psychological changes, it is critical to investigate the effects of microgravity on the functional architecture of the brain. In this study, we used resting-state functional MRI (rs-fMRI) to study whether the functional architecture of the brain is altered after 45 days of −6° head-down tilt (HDT) bed rest, which is a reliable model for the simulation of microgravity. Sixteen healthy male volunteers underwent rs-fMRI scans before and after 45 days of −6° HDT bed rest. Specifically, we used a commonly employed graph-based measure of network organization, i.e., degree centrality (DC), to perform a full-brain exploration of the regions that were influenced by simulated microgravity. We subsequently examined the functional connectivities of these regions using a seed-based resting-state functional connectivity (RSFC) analysis. We found decreased DC in two regions, the left anterior insula (aINS) and the anterior part of the middle cingulate cortex (MCC; also called the dorsal anterior cingulate cortex in many studies), in the male volunteers after 45 days of −6° HDT bed rest. Furthermore, seed-based RSFC analyses revealed that a functional network anchored in the aINS and MCC was particularly influenced by simulated microgravity. These results provide evidence that simulated microgravity alters the resting-state functional architecture of the brains of males and suggest that the processing of salience information, which is primarily subserved by the aINS–MCC functional network, is particularly influenced by spaceflight. The current findings provide a new perspective for understanding the relationships between microgravity, cognitive function, autonomic neural function, and central neural activity. PMID:24926242
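
    As a minimal illustration of the degree centrality (DC) measure used in the study above, the sketch below thresholds a node-by-node correlation matrix computed from resting-state time series and counts the surviving connections per node; the threshold value and the use of a region-level parcellation instead of voxels are illustrative assumptions.

```python
# Binary degree centrality from resting-state time series.
import numpy as np

def degree_centrality(timeseries, r_threshold=0.25):
    """timeseries: array of shape (time points, nodes); returns one DC value per node."""
    r = np.corrcoef(timeseries.T)            # node-by-node correlation matrix
    np.fill_diagonal(r, 0.0)                 # exclude self-connections
    return (r > r_threshold).sum(axis=1)     # count supra-threshold connections

rng = np.random.default_rng(0)
pre = rng.normal(size=(180, 90))             # e.g. 180 volumes x 90 regions, before bed rest
post = rng.normal(size=(180, 90))            # the same subject after bed rest
print(degree_centrality(pre)[:5], degree_centrality(post)[:5])
```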

  20. Disrupted resting-state functional architecture of the brain after 45-day simulated microgravity

    Directory of Open Access Journals (Sweden)

    Yuan eZhou

    2014-06-01

    Full Text Available Long-term spaceflight induces both physiological and psychological changes in astronauts. To understand the neural mechanisms underlying these physiological and psychological changes, it is critical to investigate the effects of microgravity on the functional architecture of the brain. In this study, we used resting-state functional MRI (rs-fMRI) to study whether the functional architecture of the brain is altered after 45 days of -6° head-down tilt (HDT) bed rest, which is a reliable model for the simulation of microgravity. Sixteen healthy male volunteers underwent rs-fMRI scans before and after 45 days of -6° HDT bed rest. Specifically, we used a commonly employed graph-based measure of network organization, i.e., degree centrality (DC), to perform a full-brain exploration of the regions that were influenced by simulated microgravity. We subsequently examined the functional connectivities of these regions using a seed-based resting-state functional connectivity (RSFC) analysis. We found decreased DC in two regions, the left anterior insula (aINS) and the anterior part of the middle cingulate cortex (MCC; also called the dorsal anterior cingulate cortex in many studies), in the male volunteers after 45 days of -6° HDT bed rest. Furthermore, seed-based RSFC analyses revealed that a functional network anchored in the aINS and MCC was particularly influenced by simulated microgravity. These results provide evidence that simulated microgravity alters the resting-state functional architecture of the brains of males and suggest that the processing of salience information, which is primarily subserved by the aINS–MCC functional network, is particularly influenced by spaceflight. The current findings provide a new perspective for understanding the relationships between microgravity, cognitive function, autonomic neural function and central neural activity.

  1. Computational Models and Emergent Properties of Respiratory Neural Networks

    Science.gov (United States)

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  2. Two distinct neural mechanisms underlying indirect reciprocity.

    Science.gov (United States)

    Watanabe, Takamitsu; Takezawa, Masanori; Nakawake, Yo; Kunimatsu, Akira; Yamasue, Hidenori; Nakamura, Mitsuhiro; Miyashita, Yasushi; Masuda, Naoki

    2014-03-18

    Cooperation is a hallmark of human society. Humans often cooperate with strangers even if they will not meet each other again. This so-called indirect reciprocity enables large-scale cooperation among nonkin and can occur based on a reputation mechanism or as a succession of pay-it-forward behavior. Here, we provide the functional and anatomical neural evidence for two distinct mechanisms governing the two types of indirect reciprocity. Cooperation occurring as reputation-based reciprocity specifically recruited the precuneus, a region associated with self-centered cognition. During such cooperative behavior, the precuneus was functionally connected with the caudate, a region linking rewards to behavior. Furthermore, the precuneus of a cooperative subject had a strong resting-state functional connectivity (rsFC) with the caudate and a large gray matter volume. In contrast, pay-it-forward reciprocity recruited the anterior insula (AI), a brain region associated with affective empathy. The AI was functionally connected with the caudate during cooperation occurring as pay-it-forward reciprocity, and its gray matter volume and rsFC with the caudate predicted the tendency of such cooperation. The revealed difference is consistent with the existing results of evolutionary game theory: although reputation-based indirect reciprocity robustly evolves as a self-interested behavior in theory, pay-it-forward indirect reciprocity does not on its own. The present study provides neural mechanisms underlying indirect reciprocity and suggests that pay-it-forward reciprocity may not occur as myopic profit maximization but elicit emotional rewards.

  3. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm, especially with data sets that are sparsely distributed.

  4. An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy

    Science.gov (United States)

    Dimas, George; Iakovidis, Dimitris K.; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios

    2017-09-01

    Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) of the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables the visualization of the GI lumen and the detection of pathologies. The localization of the commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proved more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup.

  5. An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy

    International Nuclear Information System (INIS)

    Dimas, George; Iakovidis, Dimitris K; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios

    2017-01-01

    Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) of the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables the visualization of the GI lumen and the detection of pathologies. The localization of the commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proved more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup. (paper)

  6. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
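
    The sketch below illustrates the idea in miniature: a small MLP is trained on synthetic four-layer hit measurements to output the slope and intercept of straight tracks. The chamber geometry, the assumption that drift times have already been converted to signed hit coordinates, the noise level and the network size are all illustrative, and the sketch does not reproduce the ETANN chip setup.

```python
# Toy MLP track fit: four layer measurements in, (slope, intercept) out.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
layer_z = np.array([0.0, 1.0, 2.0, 3.0])                  # z positions of the four chamber layers

def make_tracks(n):
    """Synthetic straight tracks x(z) = intercept + slope * z, measured in each layer."""
    slope = rng.uniform(-0.5, 0.5, n)
    intercept = rng.uniform(-1.0, 1.0, n)
    hits = intercept[:, None] + slope[:, None] * layer_z
    hits += rng.normal(0.0, 0.02, hits.shape)             # smearing standing in for drift-time resolution
    return hits, np.stack([slope, intercept], axis=1)

X_train, y_train = make_tracks(5000)
X_test, y_test = make_tracks(500)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X_train, y_train)
residuals = np.abs(net.predict(X_test) - y_test).mean(axis=0)
print(residuals)                                          # mean absolute error on (slope, intercept)
```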

  7. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  8. Neural correlates underlying musical semantic memory.

    Science.gov (United States)

    Groussard, M; Viader, F; Landeau, B; Desgranges, B; Eustache, F; Platel, H

    2009-07-01

    Numerous functional imaging studies have examined the neural basis of semantic memory mainly using verbal and visuospatial materials. Musical material also allows an original way to explore semantic memory processes. We used PET imaging to determine the neural substrates that underlie musical semantic memory using different tasks and stimuli. The results of three PET studies revealed a greater involvement of the anterior part of the temporal lobe. Considering clinical observations and our neuroimaging data, the musical lexicon (and, more broadly, musical semantic memory) appears to be sustained by a temporo-prefrontal cerebral network involving right and left cerebral regions.

  9. Neural changes underlying early stages of L2 vocabulary acquisition.

    Science.gov (United States)

    Pu, He; Holcomb, Phillip J; Midgley, Katherine J

    2016-11-01

    Research has shown neural changes following second language (L2) acquisition after weeks or months of instruction. But are such changes detectable even earlier than previously shown? The present study examines the electrophysiological changes underlying the earliest stages of second language vocabulary acquisition by recording event-related potentials (ERPs) within the first week of learning. Adult native English speakers with no previous Spanish experience completed less than four hours of Spanish vocabulary training, with pre- and post-training ERPs recorded to a backward translation task. Results indicate that beginning L2 learners show rapid neural changes following learning, manifested in changes to the N400 - an ERP component sensitive to lexicosemantic processing and degree of L2 proficiency. Specifically, learners in early stages of L2 acquisition show growth in N400 amplitude to L2 words following learning as well as a backward translation N400 priming effect that was absent pre-training. These results were shown within days of minimal L2 training, suggesting that the neural changes captured during adult second language acquisition are more rapid than previously shown. Such findings are consistent with models of early stages of bilingualism in adult learners of L2 (e.g., Kroll and Stewart's RHM) and reinforce the use of ERP measures to assess L2 learning.

  10. Neural mechanisms underlying sensitivity to reverse-phi motion in the fly

    Science.gov (United States)

    Meier, Matthias; Serbe, Etienne; Eichner, Hubert; Borst, Alexander

    2017-01-01

    Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics. PMID:29261684

  11. Neural correlate of resting-state functional connectivity under α2 adrenergic receptor agonist, medetomidine.

    Science.gov (United States)

    Nasrallah, Fatima A; Lew, Si Kang; Low, Amanda Si-Min; Chuang, Kai-Hsiang

    2014-01-01

    Correlative fluctuations in functional MRI (fMRI) signals across the brain at rest have been taken as a measure of functional connectivity, but the neural basis of this resting-state MRI (rsMRI) signal is not clear. Previously, we found that the α2 adrenergic agonist, medetomidine, suppressed the rsMRI correlation dose-dependently but not the stimulus evoked activation. To understand the underlying electrophysiology and neurovascular coupling, which might be altered due to the vasoconstrictive nature of medetomidine, somatosensory evoked potential (SEP) and resting electroencephalography (EEG) were measured and correlated with corresponding BOLD signals in rat brains under three dosages of medetomidine. The SEP elicited by electrical stimulation to both forepaws was unchanged regardless of medetomidine dosage, which was consistent with the BOLD activation. The identical relationship between the SEP and BOLD signal under different medetomidine dosages indicates that the neurovascular coupling was not affected. Under resting state, EEG power was the same but a depression of inter-hemispheric EEG coherence in the gamma band was observed at higher medetomidine dosage. Different from medetomidine, both resting EEG power and BOLD power and coherence were significantly suppressed with increased isoflurane level. Such reduction was likely due to suppressed neural activity as shown by diminished SEP and BOLD activation under isoflurane, suggesting different mechanisms of losing synchrony at resting-state. Even so, the similarity between electrophysiology and BOLD under stimulation and at resting-state implies a tight neurovascular coupling in both medetomidine and isoflurane. Our results confirm that medetomidine does not suppress neural activity but dissociates connectivity in the somatosensory cortex. The differential effect of medetomidine and its receptor specific action supports the neuronal origin of functional connectivity and implicates the mechanism of its sedative

  12. SoxB1-driven transcriptional network underlies neural-specific interpretation of morphogen signals.

    Science.gov (United States)

    Oosterveen, Tony; Kurdija, Sanja; Ensterö, Mats; Uhde, Christopher W; Bergsland, Maria; Sandberg, Magnus; Sandberg, Rickard; Muhr, Jonas; Ericson, Johan

    2013-04-30

    The reiterative deployment of a small cadre of morphogen signals underlies patterning and growth of most tissues during embryogenesis, but how such inductive events result in tissue-specific responses remains poorly understood. By characterizing cis-regulatory modules (CRMs) associated with genes regulated by Sonic hedgehog (Shh), retinoids, or bone morphogenetic proteins in the CNS, we provide evidence that the neural-specific interpretation of morphogen signaling reflects a direct integration of these pathways with SoxB1 proteins at the CRM level. Moreover, expression of SoxB1 proteins in the limb bud confers on mesodermal cells the potential to activate neural-specific target genes upon Shh, retinoid, or bone morphogenetic protein signaling, and the collocation of binding sites for SoxB1 and morphogen-mediatory transcription factors in CRMs faithfully predicts neural-specific gene activity. Thus, an unexpectedly simple transcriptional paradigm appears to conceptually explain the neural-specific interpretation of pleiotropic signaling during vertebrate development. Importantly, genes induced in a SoxB1-dependent manner appear to constitute repressive gene regulatory networks that are directly interlinked at the CRM level to constrain the regional expression of patterning genes. Accordingly, not only does the topology of SoxB1-driven gene regulatory networks provide a tissue-specific mode of gene activation, but it also determines the spatial expression pattern of target genes within the developing neural tube.

  13. Recurrent Neural Network Based Boolean Factor Analysis and its Application to Word Clustering

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2009-01-01

    Vol. 20, No. 7 (2009), pp. 1073-1086. ISSN 1045-9227. R&D Projects: GA MŠk(CZ) 1M0567. Institutional research plan: CEZ:AV0Z10300504. Keywords: recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * concepts search * information retrieval. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 2.889, year: 2009

  14. Neural correlates underlying micrographia in Parkinson's disease.

    Science.gov (United States)

    Wu, Tao; Zhang, Jiarong; Hallett, Mark; Feng, Tao; Hou, Yanan; Chan, Piu

    2016-01-01

    Micrographia is a common symptom in Parkinson's disease, which manifests as either a consistent or progressive reduction in the size of handwriting or both. Neural correlates underlying micrographia remain unclear. We used functional magnetic resonance imaging to investigate micrographia-related neural activity and connectivity modulations. In addition, the effect of attention and dopaminergic administration on micrographia was examined. We found that consistent micrographia was associated with decreased activity and connectivity in the basal ganglia motor circuit; while progressive micrographia was related to the dysfunction of basal ganglia motor circuit together with disconnections between the rostral supplementary motor area, rostral cingulate motor area and cerebellum. Attention significantly improved both consistent and progressive micrographia, accompanied by recruitment of anterior putamen and dorsolateral prefrontal cortex. Levodopa improved consistent micrographia accompanied by increased activity and connectivity in the basal ganglia motor circuit, but had no effect on progressive micrographia. Our findings suggest that consistent micrographia is related to dysfunction of the basal ganglia motor circuit; while dysfunction of the basal ganglia motor circuit and disconnection between the rostral supplementary motor area, rostral cingulate motor area and cerebellum likely contributes to progressive micrographia. Attention improves both types of micrographia by recruiting additional brain networks. Levodopa improves consistent micrographia by restoring the function of the basal ganglia motor circuit, but does not improve progressive micrographia, probably because of failure to repair the disconnected networks. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Unfolding code for neutron spectrometry based on neural nets technology

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural nets technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the Robust Design of Artificial Neural Networks methodology. The main features of the code are that it is easy to use, friendly, and intuitive to the user. This code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A key feature of the code is that, as input data, only seven count rate measurements with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. This code generates a full report in html format with all relevant information. (Author)
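
    As an illustration of the input/output shape described above (seven count rates in, sixty spectrum bins out), the following sketch trains a small feed-forward regressor on synthetic data. The response matrix and training spectra are random placeholders, not the IAEA compilation, and the network size is an assumption.

    # Sketch of the unfolding idea only: 7 Bonner-sphere count rates -> 60-bin spectrum.
    # R and the spectra below are random placeholders, NOT the IAEA response matrix.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    R = rng.uniform(0.0, 1.0, size=(7, 60))               # placeholder response matrix

    def sample_spectra(n):
        s = rng.gamma(shape=2.0, scale=1.0, size=(n, 60))
        return s / s.sum(axis=1, keepdims=True)           # normalized synthetic spectra

    spectra = sample_spectra(2000)
    counts = spectra @ R.T                                 # forward model: sphere count rates
    counts += 0.01 * rng.normal(size=counts.shape)         # measurement noise

    net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
    net.fit(counts, spectra)                               # learn the inverse (unfolding) map

    s_true = sample_spectra(1)
    s_hat = net.predict(s_true @ R.T)
    print("mean absolute reconstruction error:", float(np.abs(s_hat - s_true).mean()))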

  16. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Disney, Adam [University of Tennessee (UT); Reynolds, John [University of Tennessee (UT)

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  17. Microscale Architecture in Biomaterial Scaffolds for Spatial Control of Neural Cell Behavior

    Directory of Open Access Journals (Sweden)

    Edi Meco

    2018-02-01

    Full Text Available Biomaterial scaffolds mimic aspects of the native central nervous system (CNS) extracellular matrix (ECM) and have been extensively utilized to influence neural cell (NC) behavior in in vitro and in vivo settings. These biomimetic scaffolds support NC cultures, can direct the differentiation of NCs, and have recapitulated some native NC behavior in an in vitro setting. However, NC transplant therapies and treatments used in animal models of CNS disease and injury have not fully restored functionality. The observed lack of functional recovery occurs despite improvements in transplanted NC viability when incorporating biomaterial scaffolds and the potential of NC to replace damaged native cells. The behavior of NCs within biomaterial scaffolds must be directed in order to improve the efficacy of transplant therapies and treatments. Biomaterial scaffold topography and embedded bioactive cues, designed at the microscale level, can alter NC phenotype, direct migration, and differentiation. Microscale patterning in biomaterial scaffolds for spatial control of NC behavior has enhanced the capabilities of in vitro models to capture properties of the native CNS tissue ECM. Patterning techniques such as lithography, electrospinning and three-dimensional (3D) bioprinting can be employed to design the microscale architecture of biomaterial scaffolds. Here, the progress and challenges of the prevalent biomaterial patterning techniques of lithography, electrospinning, and 3D bioprinting are reported. This review analyzes NC behavioral response to specific microscale topographical patterns and spatially organized bioactive cues.

  18. Neural correlates underlying micrographia in Parkinson’s disease

    Science.gov (United States)

    Zhang, Jiarong; Hallett, Mark; Feng, Tao; Hou, Yanan; Chan, Piu

    2016-01-01

    Micrographia is a common symptom in Parkinson’s disease, which manifests as either a consistent or progressive reduction in the size of handwriting or both. Neural correlates underlying micrographia remain unclear. We used functional magnetic resonance imaging to investigate micrographia-related neural activity and connectivity modulations. In addition, the effect of attention and dopaminergic administration on micrographia was examined. We found that consistent micrographia was associated with decreased activity and connectivity in the basal ganglia motor circuit; while progressive micrographia was related to the dysfunction of basal ganglia motor circuit together with disconnections between the rostral supplementary motor area, rostral cingulate motor area and cerebellum. Attention significantly improved both consistent and progressive micrographia, accompanied by recruitment of anterior putamen and dorsolateral prefrontal cortex. Levodopa improved consistent micrographia accompanied by increased activity and connectivity in the basal ganglia motor circuit, but had no effect on progressive micrographia. Our findings suggest that consistent micrographia is related to dysfunction of the basal ganglia motor circuit; while dysfunction of the basal ganglia motor circuit and disconnection between the rostral supplementary motor area, rostral cingulate motor area and cerebellum likely contributes to progressive micrographia. Attention improves both types of micrographia by recruiting additional brain networks. Levodopa improves consistent micrographia by restoring the function of the basal ganglia motor circuit, but does not improve progressive micrographia, probably because of failure to repair the disconnected networks. PMID:26525918

  19. Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing

    Science.gov (United States)

    Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas

    2016-06-01

    While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization.

  20. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.

  1. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
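
    A compact sketch of the plain random-optimization step the method builds on, assuming a toy 2-8-1 network and target function: Gaussian search vectors are drawn around the current weights and accepted only when they lower the training error. The OPDF adaptation, stratified sampling, and dynamic node architecture of the paper are not reproduced here.

    # Plain random optimization of network weights with a Gaussian search vector.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]                    # toy target (assumption)

    def unpack(w):                                         # 2-8-1 network in one flat vector
        return w[:16].reshape(2, 8), w[16:24], w[24:32].reshape(8, 1), w[32:]

    def mse(w):
        W1, b1, W2, b2 = unpack(w)
        out = np.tanh(X @ W1 + b1) @ W2 + b2
        return float(((out.ravel() - y) ** 2).mean())

    w, sigma = 0.1 * rng.normal(size=33), 0.1
    for step in range(5000):
        candidate = w + sigma * rng.normal(size=w.size)    # Gaussian search vector
        if mse(candidate) < mse(w):                        # keep only improving steps
            w = candidate
    print("final MSE:", mse(w))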

  2. Invariant recognition drives neural representations of action sequences.

    Directory of Open Access Journals (Sweden)

    Andrea Tacchetti

    2017-12-01

    Full Text Available Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences.

  3. USC orthogonal multiprocessor for image processing with neural networks

    Science.gov (United States)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  4. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  5. A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Jim Harkin

    2009-01-01

    Full Text Available FPGA devices have emerged as a popular platform for the rapid prototyping of biological Spiking Neural Networks (SNNs) applications, offering the key requirement of reconfigurability. However, FPGAs do not efficiently realise the biologically plausible neuron and synaptic models of SNNs, and current FPGA routing structures cannot accommodate the high levels of interneuron connectivity inherent in complex SNNs. This paper highlights and discusses the current challenges of implementing scalable SNNs on reconfigurable FPGAs. The paper proposes a novel field programmable neural network architecture (EMBRACE), incorporating low-power analogue spiking neurons, interconnected using a Network-on-Chip architecture. Results on the evaluation of the EMBRACE architecture using the XOR benchmark problem are presented, and the performance of the architecture is discussed. The paper also discusses the adaptability of the EMBRACE architecture in supporting fault tolerant computing.

  6. Optimized Neural Network for Fault Diagnosis and Classification

    International Nuclear Information System (INIS)

    Elaraby, S.M.

    2005-01-01

    This paper presents a developed and implemented toolbox for optimizing the neural network structure of fault diagnosis and classification. An evolutionary algorithm based on a hierarchical genetic algorithm structure is used for optimization. The simplest feed-forward neural network architecture is selected. The developed toolbox has a friendly user interface. Multiple solutions are generated. The performance and applicability of the proposed toolbox are verified with benchmark data patterns and accident diagnosis of the Egyptian Second Research Reactor (ETRR-2).

  7. Learning in Artificial Neural Systems

    Science.gov (United States)

    Matheus, Christopher J.; Hohensee, William E.

    1987-01-01

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.

  8. Prediction of Aerodynamic Coefficients for Wind Tunnel Data using a Genetic Algorithm Optimized Neural Network

    Science.gov (United States)

    Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy

    2002-01-01

    A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g. lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well-fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture to give best results, avoiding over- and under-fitting of the test data.
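
    The following sketch illustrates the genetic-algorithm-over-architectures idea on a toy surrogate of the (angle of attack, Mach number) to lift-coefficient mapping. The data generator, the architecture encoding, and the mutation-only GA are illustrative assumptions, not the setup of the study.

    # GA-style search over hidden-layer sizes, scored by validation MSE on toy data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    alpha = rng.uniform(-5, 15, 400)                       # angle of attack (deg), toy range
    mach = rng.uniform(0.2, 0.9, 400)
    cl = 0.1 * alpha / (np.sqrt(np.abs(1.0 - mach ** 2)) + 0.1) + 0.05 * rng.normal(size=400)
    X = np.column_stack([alpha, mach])
    Xtr, Xva, ytr, yva = X[:300], X[300:], cl[:300], cl[300:]

    def fitness(hidden):                                   # lower validation MSE is better
        net = MLPRegressor(hidden_layer_sizes=tuple(hidden), max_iter=2000, random_state=0)
        net.fit(Xtr, ytr)
        return float(((net.predict(Xva) - yva) ** 2).mean())

    population = [list(rng.integers(2, 20, size=2)) for _ in range(6)]
    for generation in range(5):
        parents = sorted(population, key=fitness)[:3]      # keep the fitter half
        children = [[max(2, size + int(rng.integers(-3, 4))) for size in p] for p in parents]
        population = parents + children                    # mutation-only reproduction
    print("best hidden-layer sizes:", min(population, key=fitness))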

  9. Decision Making under Uncertainty: A Neural Model based on Partially Observable Markov Decision Processes

    Directory of Open Access Journals (Sweden)

    Rajesh P N Rao

    2010-11-01

    Full Text Available A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single optimal estimate of state but on the posterior distribution over states (the belief state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize
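
    A short sketch of the core belief-state machinery described above, under strong simplifying assumptions (two hidden states, a fixed observation likelihood, and a hand-set commitment threshold in place of the learned one): beliefs are updated by Bayes' rule after each noisy observation, and an overt choice is made once one state is sufficiently probable.

    # Bayesian belief update over two hidden states with a threshold rule for committing.
    import numpy as np

    rng = np.random.default_rng(0)
    p_obs = np.array([[0.6, 0.4],      # P(observation | state 0)
                      [0.4, 0.6]])     # P(observation | state 1)
    true_state = 1
    belief = np.array([0.5, 0.5])      # prior over the two states
    threshold = 0.95                   # commit once one state is this probable (assumption)

    for t in range(100):
        obs = rng.choice(2, p=p_obs[true_state])           # draw a noisy observation
        belief = belief * p_obs[:, obs]                     # Bayes rule: prior x likelihood
        belief /= belief.sum()
        if belief.max() > threshold:                        # stop gathering information
            break
    print(f"chose state {belief.argmax()} after {t + 1} observations; belief = {belief}")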

  10. Investigation of support vector machine for the detection of architectural distortion in mammographic images

    International Nuclear Information System (INIS)

    Guo, Q; Shao, J; Ruiz, V

    2005-01-01

    This paper investigates detection of architectural distortion in mammographic images using support vector machine. Hausdorff dimension is used to characterise the texture feature of mammographic images. Support vector machine, a learning machine based on statistical learning theory, is trained through supervised learning to detect architectural distortion. Compared to the Radial Basis Function neural networks, SVM produced more accurate classification results in distinguishing architectural distortion abnormality from normal breast parenchyma

  11. Investigation of support vector machine for the detection of architectural distortion in mammographic images

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Q [Department of Cybernetics, University of Reading, Reading RG6 6AY (United Kingdom); Shao, J [Department of Electronics, University of Kent at Canterbury, Kent CT2 7NT (United Kingdom); Ruiz, V [Department of Cybernetics, University of Reading, Reading RG6 6AY (United Kingdom)

    2005-01-01

    This paper investigates detection of architectural distortion in mammographic images using support vector machine. Hausdorff dimension is used to characterise the texture feature of mammographic images. Support vector machine, a learning machine based on statistical learning theory, is trained through supervised learning to detect architectural distortion. Compared to the Radial Basis Function neural networks, SVM produced more accurate classification results in distinguishing architectural distortion abnormality from normal breast parenchyma.

  12. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Full Text Available Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures-recurrent connections, shared feed-forward projections, and shared gain fluctuations-on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.

  13. A neural learning classifier system with self-adaptive constructivism for mobile robot control.

    Science.gov (United States)

    Hurst, Jacob; Bull, Larry

    2006-01-01

    For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.

  14. Modelling and prediction for chaotic fir laser attractor using rational function neural network.

    Science.gov (United States)

    Cho, S

    2001-02-01

    Many real-world systems such as irregular ECG signals, volatility of currency exchange rates and heated fluid reactions exhibit highly complex nonlinear characteristics known as chaos. These chaotic systems cannot be treated satisfactorily using linear system theory due to their high dimensionality and irregularity. This research focuses on prediction and modelling of a chaotic FIR (Far InfraRed) laser system for which the underlying equations are not given. This paper proposes a method for predicting and modelling a chaotic FIR laser time series using a rational function neural network. Three network architectures, TDNN (Time Delayed Neural Network), RBF (radial basis function) network and the RF (rational function) network, are also presented. Comparisons between these networks' performance show the improvements introduced by the RF network in terms of reduced network complexity and better predictive ability.
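
    The record gives no implementation detail, but the time-delay prediction setup it compares can be sketched as follows, with a generic multilayer perceptron standing in for the rational function network and a logistic-map series standing in for the FIR laser data; the embedding dimension and network size are assumptions.

    # Time-delay embedding + one-step-ahead prediction of a synthetic chaotic series.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    x = np.empty(2000); x[0] = 0.3
    for t in range(1999):
        x[t + 1] = 3.9 * x[t] * (1.0 - x[t])               # chaotic logistic map (stand-in)

    delay = 4                                               # embedding dimension (assumption)
    X = np.column_stack([x[i:len(x) - delay + i] for i in range(delay)])
    y = x[delay:]

    split = 1500
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
    net.fit(X[:split], y[:split])
    rmse = np.sqrt(((net.predict(X[split:]) - y[split:]) ** 2).mean())
    print("one-step test RMSE:", float(rmse))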

  15. Suppression of anomalous synchronization and nonstationary behavior of neural network under small-world topology

    Science.gov (United States)

    Boaretto, B. R. R.; Budzinski, R. C.; Prado, T. L.; Kurths, J.; Lopes, S. R.

    2018-05-01

    It is known that neural networks under small-world topology can present anomalous synchronization and nonstationary behavior for weak coupling regimes. Here, we propose methods to suppress the anomalous synchronization and also to diminish the nonstationary behavior occurring in a weakly coupled neural network under small-world topology. We consider a network of 2000 thermally sensitive identical neurons, based on the Hodgkin-Huxley model, in a small-world topology, with the probability of adding a nonlocal connection equal to p = 0.001. Based on experimental protocols to suppress anomalous synchronization, as well as nonstationary behavior of the neural network dynamics, we make use of (i) external stimulus (pulsed current); (ii) biological parameter changes (neuron membrane conductance changes); and (iii) body temperature changes. Quantification analysis to evaluate phase synchronization makes use of Kuramoto's order parameter, while recurrence quantification analysis, particularly the determinism, computed over the easily accessible mean field of the network, the local field potential (LFP), is used to evaluate nonstationary states. We show that the proposed methods can control the anomalous synchronization and nonstationarity occurring for weak coupling parameters without any effect on the individual neuron dynamics, nor on the expected asymptotic synchronized states occurring for large values of the coupling parameter.
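
    Kuramoto's order parameter mentioned above is simple to state: R = |(1/N) Σ_k exp(iθ_k)|, with R near 1 for phase-locked populations and near 0 for incoherent ones. The sketch below evaluates it on synthetic phases standing in for the 2000 model neurons.

    # Kuramoto order parameter from instantaneous phases (synthetic placeholder phases).
    import numpy as np

    rng = np.random.default_rng(0)
    phases_sync = rng.normal(loc=0.0, scale=0.2, size=2000)     # tightly clustered phases
    phases_async = rng.uniform(0.0, 2.0 * np.pi, size=2000)     # uniformly spread phases

    def kuramoto_R(theta):
        return float(np.abs(np.exp(1j * theta).mean()))

    print("synchronized population:", kuramoto_R(phases_sync))   # close to 1
    print("incoherent population:  ", kuramoto_R(phases_async))  # close to 0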

  16. The characteristic patterns of neuronal avalanches in mice under anesthesia and at rest: An investigation using constrained artificial neural networks

    Science.gov (United States)

    Knöpfel, Thomas; Leech, Robert

    2018-01-01

    Local perturbations within complex dynamical systems can trigger cascade-like events that spread across significant portions of the system. Cascades of this type have been observed across a broad range of scales in the brain. Studies of these cascades, known as neuronal avalanches, usually report the statistics of large numbers of avalanches, without probing the characteristic patterns produced by the avalanches themselves. This is partly due to limitations in the extent or spatiotemporal resolution of commonly used neuroimaging techniques. In this study, we overcome these limitations by using optical voltage (genetically encoded voltage indicators) imaging. This allows us to record cortical activity in vivo across an entire cortical hemisphere, at both high spatial (~30 µm) and temporal (~20 ms) resolution in mice that are either in an anesthetized or awake state. We then use artificial neural networks to identify the characteristic patterns created by neuronal avalanches in our data. The avalanches in the anesthetized cortex are most accurately classified by an artificial neural network architecture that simultaneously connects spatial and temporal information. This is in contrast with the awake cortex, in which avalanches are most accurately classified by an architecture that treats spatial and temporal information separately, due to the increased levels of spatiotemporal complexity. This is in keeping with reports of higher levels of spatiotemporal complexity in the awake brain coinciding with features of a dynamical system operating close to criticality. PMID:29795654

  17. Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems

    Science.gov (United States)

    Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof

    The evolution of software architectures led to the rising importance of the Service Oriented Architecture (SOA) concept. This architectural paradigm supports building flexible distributed service systems. In the paper, the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built around the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.

  18. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  19. Quantitative phase microscopy using deep neural networks

    Science.gov (United States)

    Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George

    2018-02-01

    Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using the data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. However, large errors remained, indicating that our approach still needs to be improved.

  20. One-day-ahead streamflow forecasting via super-ensembles of several neural network architectures based on the Multi-Level Diversity Model

    Science.gov (United States)

    Brochero, Darwin; Hajji, Islem; Pina, Jasson; Plana, Queralt; Sylvain, Jean-Daniel; Vergeynst, Jenna; Anctil, Francois

    2015-04-01

    Theories about generalization error with ensembles are mainly based on the diversity concept, which promotes resorting to many members of different properties to support mutually agreeable decisions. Kuncheva (2004) proposed the Multi Level Diversity Model (MLDM) to promote diversity in model ensembles, combining different data subsets, input subsets, models, parameters, and including a combiner level in order to optimize the final ensemble. This work tests the hypothesis about the minimisation of the generalization error with ensembles of Neural Network (NN) structures. We used the MLDM to evaluate two different scenarios: (i) ensembles from the same NN architecture, and (ii) a super-ensemble built by a combination of sub-ensembles of many NN architectures. The time series used correspond to the 12 basins of the MOdel Parameter Estimation eXperiment (MOPEX) project that were used by Duan et al. (2006) and Vos (2013) as benchmark. Six architectures are evaluated: FeedForward NN (FFNN) trained with the Levenberg Marquardt algorithm (Hagan et al., 1996), FFNN trained with SCE (Duan et al., 1993), Recurrent NN trained with a complex method (Weins et al., 2008), Dynamic NARX NN (Leontaritis and Billings, 1985), Echo State Network (ESN), and leaky integrator neuron (L-ESN) (Lukosevicius and Jaeger, 2009). Each architecture performs separately an Input Variable Selection (IVS) according to a forward stepwise selection (Anctil et al., 2009) using mean square error as the objective function. Post-processing by Predictor Stepwise Selection (PSS) of the super-ensemble has been done following the method proposed by Brochero et al. (2011). IVS results showed that the lagged stream flow, lagged precipitation, and Standardized Precipitation Index (SPI) (McKee et al., 1993) were the most relevant variables. They were respectively selected as one of the first three selected variables in 66, 45, and 28 of the 72 scenarios. A relationship between aridity index (Arora, 2002) and NN
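
    The forward stepwise input variable selection (IVS) loop mentioned above can be sketched as follows, with a linear model standing in for the neural network scorers and synthetic candidate predictors standing in for the MOPEX series; the stopping rule and data split are assumptions.

    # Greedy forward stepwise IVS: add the candidate input that most lowers validation MSE.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 500
    candidates = {f"x{i}": rng.normal(size=n) for i in range(6)}
    y = 2.0 * candidates["x1"] - 1.0 * candidates["x3"] + 0.1 * rng.normal(size=n)

    def val_mse(names):
        X = np.column_stack([candidates[k] for k in names])
        model = LinearRegression().fit(X[:400], y[:400])
        return float(((model.predict(X[400:]) - y[400:]) ** 2).mean())

    selected, best = [], np.inf
    while True:
        trials = {k: val_mse(selected + [k]) for k in candidates if k not in selected}
        k_best = min(trials, key=trials.get)
        if trials[k_best] >= best:                         # stop when nothing improves
            break
        selected.append(k_best); best = trials[k_best]
    print("selected inputs:", selected)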

  1. A shared, flexible neural map architecture reflects capacity limits in both visual short-term memory and enumeration.

    Science.gov (United States)

    Knops, André; Piazza, Manuela; Sengupta, Rakesh; Eger, Evelyn; Melcher, David

    2014-07-23

    Human cognition is characterized by severe capacity limits: we can accurately track, enumerate, or hold in mind only a small number of items at a time. It remains debated whether capacity limitations across tasks are determined by a common system. Here we measure brain activation of adult subjects performing either a visual short-term memory (vSTM) task consisting of holding in mind precise information about the orientation and position of a variable number of items, or an enumeration task consisting of assessing the number of items in those sets. We show that task-specific capacity limits (three to four items in enumeration and two to three in vSTM) are neurally reflected in the activity of the posterior parietal cortex (PPC): an identical set of voxels in this region, commonly activated during the two tasks, changed its overall response profile reflecting task-specific capacity limitations. These results, replicated in a second experiment, were further supported by multivariate pattern analysis in which we could decode the number of items presented over a larger range during enumeration than during vSTM. Finally, we simulated our results with a computational model of PPC using a saliency map architecture in which the level of mutual inhibition between nodes gives rise to capacity limitations and reflects the task-dependent precision with which objects need to be encoded (high precision for vSTM, lower precision for enumeration). Together, our work supports the existence of a common, flexible system underlying capacity limits across tasks in PPC that may take the form of a saliency map. Copyright © 2014 the authors 0270-6474/14/349857-10$15.00/0.
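
    The saliency-map account sketched above rests on mutual inhibition between item nodes. A toy rate model, with all parameters chosen only for illustration, shows the basic effect: as more items compete under shared inhibition, the settled activity per item falls, which is the signature of a shared capacity limit.

    # Toy saliency-map-like rate model with global mutual inhibition between item nodes.
    import numpy as np

    def settle(n_items, beta=0.35, steps=2000, dt=0.01):
        x = np.zeros(n_items)
        for _ in range(steps):
            inhibition = beta * (x.sum() - x)              # inhibition from all other nodes
            drive = np.maximum(1.0 - inhibition, 0.0)      # rectified net input per item
            x += dt * (-x + drive)                         # leaky rate dynamics (Euler step)
        return x

    for n in range(1, 7):
        print(f"set size {n}: settled activity per item = {settle(n).mean():.2f}")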

  2. The Neural Correlates Underlying Belief Reasoning for Self and for Others: Evidence from ERPs.

    Science.gov (United States)

    Jiang, Qin; Wang, Qi; Li, Peng; Li, Hong

    2016-01-01

    Belief reasoning is typical mental state reasoning in theory of mind (ToM). Although previous studies have explored the neural bases of belief reasoning, the neural correlates of belief reasoning for self and for others are rarely addressed. The decoupling mechanism of distinguishing the mental state of others from one's own is essential for ToM processing. To address the electrophysiological bases underlying the decoupling mechanism, the present event-related potential study compared the time course of neural activities associated with belief reasoning for self and for others when the belief belonging to self was consistent or inconsistent with others. Results showed that during a 450-600 ms period, belief reasoning for self elicited a larger late positive component (LPC) than for others when beliefs were inconsistent with each other. The LPC divergence is assumed to reflect the categorization of agencies in ToM processes.

  3. Electrospun Nanofibrous Materials for Neural Tissue Engineering

    Directory of Open Access Journals (Sweden)

    Yee-Shuan Lee

    2011-02-01

    Full Text Available The use of biomaterials processed by the electrospinning technique has gained considerable interest for neural tissue engineering applications. The tissue engineering strategy is to facilitate the regrowth of nerves by combining an appropriate cell type with the electrospun scaffold. Electrospinning can generate fibrous meshes having fiber diameter dimensions at the nanoscale and these fibers can be nonwoven or oriented to facilitate neurite extension via contact guidance. This article reviews studies evaluating the effect of the scaffold’s architectural features such as fiber diameter and orientation on neural cell function and neurite extension. Electrospun meshes made of natural polymers, proteins and compositions having electrical activity in order to enhance neural cell function are also discussed.

  4. Neural network for adapting nuclear power plant control for wide-range operation

    International Nuclear Information System (INIS)

    Ku, C.C.; Lee, K.Y.; Edwards, R.M.

    1991-01-01

    A new concept of using neural networks has been evaluated for optimal control of a nuclear reactor. The neural network uses the architecture of a standard backpropagation network; however, a new dynamic learning algorithm has been developed to capture the underlying system dynamics. The learning algorithm is based on parameter estimation for dynamic systems. The approach is demonstrated on an optimal reactor temperature controller by adjusting the feedback gains for wide-range operation. Application of optimal control to a reactor has been considered for improving temperature response using a robust fifth-order reactor power controller. Conventional gain scheduling can be employed to extend the range of good performance to accommodate large changes in power where nonlinear characteristics significantly modify the dynamics of the power plant. Gain scheduling is developed based on expected parameter variations, and it may be advantageous to further adapt feedback gains on-line to better match actual plant performance. A neural network approach is used here to adapt the gains to better accommodate plant uncertainties and thereby achieve improved robustness characteristics

  5. Spatially Nonlinear Interdependence of Alpha-Oscillatory Neural Networks under Chan Meditation

    Directory of Open Access Journals (Sweden)

    Pei-Chen Lo

    2013-01-01

    Full Text Available This paper reports the results of our investigation of the effects of Chan meditation on brain electrophysiological behaviors from the viewpoint of spatially nonlinear interdependence among regional neural networks. Particular emphasis is laid on the alpha-dominated EEG (electroencephalograph). Continuous-time wavelet transform was adopted to detect the epochs containing substantial alpha activities. Nonlinear interdependence quantified by the similarity index S(X|Y), the influence of source signal Y on sink signal X, was applied to the nonlinear dynamical model in phase space reconstructed from multichannel EEG. The experimental group involved ten experienced Chan-meditation practitioners, while the control group included ten healthy subjects within the same age range, yet without any meditation experience. Nonlinear interdependence among various cortical regions was explored for five local neural-network regions: frontal, posterior, right-temporal, left-temporal, and central regions. In the experimental group, the inter-regional interaction was evaluated for the brain dynamics under three different stages: at rest (stage R, pre-meditation background recording), in Chan meditation (stage M), and during the unique Chakra-focusing practice (stage C). The experimental group exhibits stronger interactions among various local neural networks at stages M and C compared with those at stage R. The intergroup comparison demonstrates that the Chan-meditation brain possesses better cortical inter-regional interactions than the resting brain of the control group.
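
    In the spirit of the similarity index S(X|Y) used above, the following sketch delay-embeds two signals and compares each X-point's distance to its own nearest neighbours with its distance to the time partners of Y's nearest neighbours; values near 1 indicate strong dependence of X on Y. The embedding parameters, the omission of a Theiler window, and the test signals are assumptions, not the study's settings.

    # Sketch of a nonlinear interdependence (similarity) index between two signals.
    import numpy as np

    def embed(sig, dim=3, lag=2):
        n = len(sig) - (dim - 1) * lag
        return np.column_stack([sig[i * lag:i * lag + n] for i in range(dim)])

    def similarity_index(x, y, k=5, dim=3, lag=2):
        X, Y = embed(x, dim, lag), embed(y, dim, lag)
        n = min(len(X), len(Y)); X, Y = X[:n], Y[:n]
        dX = np.linalg.norm(X[:, None] - X[None, :], axis=2)    # pairwise distances in X space
        dY = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
        np.fill_diagonal(dX, np.inf); np.fill_diagonal(dY, np.inf)
        nnX = np.argsort(dX, axis=1)[:, :k]                     # X's own nearest neighbours
        nnY = np.argsort(dY, axis=1)[:, :k]                     # time partners of Y's neighbours
        r_own = np.take_along_axis(dX, nnX, axis=1).mean(axis=1)
        r_cond = np.take_along_axis(dX, nnY, axis=1).mean(axis=1)
        return float(np.mean(r_own / r_cond))

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 40.0 * np.pi, 1500)
    y_src = np.sin(t) + 0.05 * rng.normal(size=t.size)          # source signal Y
    x_snk = np.sin(t + 0.5) + 0.05 * rng.normal(size=t.size)    # sink X sharing Y's rhythm
    print("S(X|Y), coupled:    ", round(similarity_index(x_snk, y_src), 3))
    print("S(X|Y), independent:", round(similarity_index(rng.normal(size=t.size), y_src), 3))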

  6. Unfolding code for neutron spectrometry based on neural nets technology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural nets technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the Robust Design of Artificial Neural Networks methodology. The main features of the code are that it is easy to use, friendly, and intuitive to the user. This code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A key feature of the code is that, as input data, only seven count rate measurements with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. This code generates a full report in html format with all relevant information. (Author)

  7. Neural mechanisms underlying human consensus decision-making.

    Science.gov (United States)

    Suzuki, Shinsuke; Adachi, Ryo; Dunne, Simon; Bossaerts, Peter; O'Doherty, John P

    2015-04-22

    Consensus building in a group is a hallmark of animal societies, yet little is known about its underlying computational and neural mechanisms. Here, we applied a computational framework to behavioral and fMRI data from human participants performing a consensus decision-making task with up to five other participants. We found that participants reached consensus decisions through integrating their own preferences with information about the majority group members' prior choices, as well as inferences about how much each option was stuck to by the other people. These distinct decision variables were separately encoded in distinct brain areas-the ventromedial prefrontal cortex, posterior superior temporal sulcus/temporoparietal junction, and intraparietal sulcus-and were integrated in the dorsal anterior cingulate cortex. Our findings provide support for a theoretical account in which collective decisions are made through integrating multiple types of inference about oneself, others, and environments, processed in distinct brain modules. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Modeling of an industrial drying process by artificial neural networks

    Directory of Open Access Journals (Sweden)

    E. Assidjo

    2008-09-01

    Full Text Available A suitable method is needed to solve the nonquality problem in the grated coconut industry due to the poor control of product humidity during the process. In this study the possibility of using an artificial neural network (ANN), precisely a Multilayer Perceptron, for modeling the drying step of the grated coconut production process is highlighted. Drying must confer on the product a final moisture content of 3%. Unfortunately, under industrial conditions, this moisture varies from 1.9 to 4.8%. In order to control this parameter and consequently reduce the proportion of the product that does not meet the humidity specification, a 9-4-1 neural network architecture was established using data gathered from an industrial plant. This Multilayer Perceptron can satisfactorily model the process with less bias, ranging from -0.35 to 0.34%, and can reduce the rate of rejected products from 92% to 3% during the first cycle of drying.
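
    The 9-4-1 architecture reported above is small enough to sketch directly; the data below are random placeholders for the nine plant variables and the moisture target, so only the network shape follows the record.

    # 9-4-1 multilayer perceptron: nine process variables in, one moisture estimate out.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 9))                               # placeholder process variables
    w_true = rng.normal(size=9)
    moisture = 3.0 + 0.5 * np.tanh(X @ w_true) + 0.1 * rng.normal(size=800)  # around 3 %

    mlp = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
    mlp.fit(X[:600], moisture[:600])
    bias = mlp.predict(X[600:]) - moisture[600:]
    print("test bias range: %.2f%% to %.2f%%" % (bias.min(), bias.max()))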

  9. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches.
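
    To make the search loop concrete, here is a hedged sketch of the idea only: a tiny genetic algorithm that evolves the number of delayed inputs and hidden neurons of an MLP time-series predictor. Training uses scikit-learn's default solver rather than the modified Levenberg-Marquardt/Bayesian-regularization scheme of the paper, and the cumulative failure-time series is synthetic.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    series = np.cumsum(rng.exponential(1.0, 300))   # placeholder cumulative failure times

    def fitness(n_lags, n_hidden):
        # Build delayed-input samples and score one candidate architecture on a hold-out tail.
        X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
        y = series[n_lags:]
        model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
        model.fit(X[:-50], y[:-50])
        return -np.mean((model.predict(X[-50:]) - y[-50:]) ** 2)   # higher is better

    # Genomes are (number of delayed inputs, number of hidden neurons).
    population = [(int(rng.integers(1, 10)), int(rng.integers(2, 20))) for _ in range(6)]
    for generation in range(5):
        ranked = sorted(population, key=lambda g: fitness(*g), reverse=True)
        parents = ranked[:3]
        children = [(max(1, a + int(rng.integers(-2, 3))), max(2, b + int(rng.integers(-4, 5))))
                    for a, b in parents]                            # mutate the parents
        population = parents + children

    best = max(population, key=lambda g: fitness(*g))
    print("best (delayed inputs, hidden neurons):", best)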

  10. Architectural design for a low cost FPGA-based traffic signal detection system in vehicles

    Science.gov (United States)

    López, Ignacio; Salvador, Rubén; Alarcón, Jaime; Moreno, Félix

    2007-05-01

    In this paper we propose an architecture for an embedded traffic signal detection system. Development of Advanced Driver Assistance Systems (ADAS) is currently one of the major trends of research in the automotive field. Examples of past and ongoing projects in the field are CHAMELEON ("Pre-Crash Application all around the vehicle", IST 1999-10108), PREVENT (Preventive and Active Safety Applications, FP6-507075, http://www.prevent-ip.org/) and AVRT in the US (Advanced Vision-Radar Threat Detection (AVRT): A Pre-Crash Detection and Active Safety System). A major interest can be observed in systems for real-time analysis of complex driving scenarios, evaluating risk and anticipating collisions. The system will use a low cost CCD camera on the dashboard facing the road. The images will be processed by an Altera Cyclone family FPGA. The board performs median and Sobel filtering of the incoming frames at PAL rate, and analyzes them for several categories of signals. The result is conveyed to the driver. The scarce resources provided by the hardware require an architecture designed for optimal use. The system will use a combination of neural networks and an adapted blackboard architecture. Several neural networks will be used in sequence for image analysis, by reconfiguring a single, generic hardware neural network in the FPGA. This generic network is optimized for speed, in order to admit several executions within the frame rate. The sequence will follow the execution cycle of the blackboard architecture. The global blackboard architecture being developed and the hardware architecture for the generic, reconfigurable FPGA perceptron are explained in this paper. The project is still at an early stage; however, some hardware implementation results are already available and are offered in the paper.
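
    As a software-only illustration of the preprocessing stage mentioned above (the paper implements it in FPGA hardware), the sketch below applies a median filter followed by Sobel edge detection to a synthetic PAL-sized frame using SciPy.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(3)
    frame = rng.integers(0, 256, size=(576, 720)).astype(float)   # placeholder PAL-sized frame

    denoised = ndimage.median_filter(frame, size=3)   # suppress salt-and-pepper style noise

    # Sobel gradients in x and y, combined into an edge-magnitude image.
    gx = ndimage.sobel(denoised, axis=1)
    gy = ndimage.sobel(denoised, axis=0)
    edges = np.hypot(gx, gy)
    print(edges.shape)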

  11. Artificial neural networks for control of a grid-connected rectifier/inverter under disturbance, dynamic and power converter switching conditions.

    Science.gov (United States)

    Li, Shuhui; Fairbank, Michael; Johnson, Cameron; Wunsch, Donald C; Alonso, Eduardo; Proaño, Julio L

    2014-04-01

    Three-phase grid-connected converters are widely used in renewable and electric power system applications. Traditionally, grid-connected converters are controlled with standard decoupled d-q vector control mechanisms. However, recent studies indicate that such mechanisms show limitations in their applicability to dynamic systems. This paper investigates how to mitigate such restrictions using a neural network to control a grid-connected rectifier/inverter. The neural network implements a dynamic programming algorithm and is trained by using back-propagation through time. To enhance performance and stability under disturbance, additional strategies are adopted, including the use of integrals of error signals to the network inputs and the introduction of grid disturbance voltage to the outputs of a well-trained network. The performance of the neural-network controller is studied under typical vector control conditions and compared against conventional vector control methods, which demonstrates that the neural vector control strategy proposed in this paper is effective. Even in dynamic and power converter switching environments, the neural vector controller shows strong ability to trace rapidly changing reference commands, tolerate system disturbances, and satisfy control requirements for a faulted power system.

  12. Neural Mechanisms Underlying the Cost of Task Switching: An ERP Study

    Science.gov (United States)

    Li, Ling; Wang, Meng; Zhao, Qian-Jing; Fogelson, Noa

    2012-01-01

    Background When switching from one task to a new one, reaction times are prolonged. This phenomenon is called switch cost (SC). Researchers have recently used several kinds of task-switching paradigms to uncover neural mechanisms underlying the SC. Task-set reconfiguration and passive dissipation of a previously relevant task-set have been reported to contribute to the cost of task switching. Methodology/Principal Findings An unpredictable cued task-switching paradigm was used, during which subjects were instructed to switch between a color and an orientation discrimination task. Electroencephalography (EEG) and behavioral measures were recorded in 14 subjects. Response-stimulus interval (RSI) and cue-stimulus interval (CSI) were manipulated with short and long intervals, respectively. Switch trials delayed reaction times (RTs) and increased error rates compared with repeat trials. The SC of RTs was smaller in the long CSI condition. For cue-locked waveforms, switch trials generated a larger parietal positive event-related potential (ERP), and a larger slow parietal positivity compared with repeat trials in the short and long CSI condition. Neural SC of cue-related ERP positivity was smaller in the long RSI condition. For stimulus-locked waveforms, a larger switch-related central negative ERP component was observed, and the neural SC of the ERP negativity was smaller in the long CSI. Results of standardized low resolution electromagnetic tomography (sLORETA) for both ERP positivity and negativity showed that switch trials evoked larger activation than repeat trials in dorsolateral prefrontal cortex (DLPFC) and posterior parietal cortex (PPC). Conclusions/Significance The results provide evidence that both RSI and CSI modulate the neural activities in the process of task-switching, but that these have a differential role during task-set reconfiguration and passive dissipation of a previously relevant task-set. PMID:22860090

  13. Neural mechanisms underlying the cost of task switching: an ERP study.

    Directory of Open Access Journals (Sweden)

    Ling Li

    Full Text Available BACKGROUND: When switching from one task to a new one, reaction times are prolonged. This phenomenon is called switch cost (SC). Researchers have recently used several kinds of task-switching paradigms to uncover neural mechanisms underlying the SC. Task-set reconfiguration and passive dissipation of a previously relevant task-set have been reported to contribute to the cost of task switching. METHODOLOGY/PRINCIPAL FINDINGS: An unpredictable cued task-switching paradigm was used, during which subjects were instructed to switch between a color and an orientation discrimination task. Electroencephalography (EEG) and behavioral measures were recorded in 14 subjects. Response-stimulus interval (RSI) and cue-stimulus interval (CSI) were manipulated with short and long intervals, respectively. Switch trials delayed reaction times (RTs) and increased error rates compared with repeat trials. The SC of RTs was smaller in the long CSI condition. For cue-locked waveforms, switch trials generated a larger parietal positive event-related potential (ERP), and a larger slow parietal positivity compared with repeat trials in the short and long CSI condition. Neural SC of cue-related ERP positivity was smaller in the long RSI condition. For stimulus-locked waveforms, a larger switch-related central negative ERP component was observed, and the neural SC of the ERP negativity was smaller in the long CSI. Results of standardized low resolution electromagnetic tomography (sLORETA) for both ERP positivity and negativity showed that switch trials evoked larger activation than repeat trials in dorsolateral prefrontal cortex (DLPFC) and posterior parietal cortex (PPC). CONCLUSIONS/SIGNIFICANCE: The results provide evidence that both RSI and CSI modulate the neural activities in the process of task-switching, but that these have a differential role during task-set reconfiguration and passive dissipation of a previously relevant task-set.

  14. A learning algorithm for oscillatory cellular neural networks.

    Science.gov (United States)

    Ho, C Y.; Kurokawa, H

    1999-07-01

    We present a cellular-type oscillatory neural network for temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural network that consists of local connections for feature extraction. By means of a novel learning rule and an initialization scheme, global synchronization can be accomplished without incurring any erroneous synchrony among uncorrelated objects. Each oscillator comprises two mutually coupled neurons, and neurons share a piecewise-linear activation function characteristic. The dynamics of traditional oscillatory models are simplified by using only one plastic synapse, and the overall complexity for hardware implementation is reduced. Based on the connectedness of image segments, it is shown that global synchronization and desynchronization can be achieved by means of locally connected synapses, and this opens up a tremendous application potential for the proposed architecture. Furthermore, by using special grouping synapses it is demonstrated that temporal segregation of overlapping gray-level and color segments can also be achieved. Finally, simulation results show that the proposed learning rule circumvents the problem of component mismatches, and hence facilitates large-scale integration.

  15. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety

    Directory of Open Access Journals (Sweden)

    Sylvia A. Morelli

    2013-05-01

    Full Text Available Although many studies have examined the neural basis of experiencing empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. 32 participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, while instructed to empathize, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; septal area, SA). Two key regions – the ventral AI and SA – were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching versus empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others' emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy (DMPFC, MPFC, TPJ, amygdala) and social cognition. The current results reveal how attention impacts empathic processes and provide insight into how empathy may unfold in everyday interactions.

  16. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  17. The architecture of enterprise hospital information system.

    Science.gov (United States)

    Lu, Xudong; Duan, Huilong; Li, Haomin; Zhao, Chenhui; An, Jiye

    2005-01-01

    Because of the complexity of the hospital environment, there exist a lot of medical information systems from different vendors with incompatible structures. In order to establish an enterprise hospital information system, the integration among these heterogeneous systems must be considered. Complete integration should cover three aspects: data integration, function integration and workflow integration. However, most previous architecture designs did not accomplish such complete integration. This article offers an architecture design for the enterprise hospital information system based on the concept of a digital neural network system in the hospital. It covers all three aspects of integration, and eventually achieves the target of one virtual data center with an Enterprise Viewer for users of different roles. The initial implementation of the architecture in the 5-year Digital Hospital Project in Huzhou Central hospital of Zhejiang Province is also described.

  18. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, artificial neural networks (ANN) have made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN depends mainly on its topology and weights. Some systems have been developed using a genetic algorithm (GA) to optimize the topology of the ANN, but they suffer from some limitations: (1) the computation time required to train the ANN several times to reach the required average weights, (2) the slowness of the GA optimization process, and (3) the fitness noise that appears in the optimization of the ANN. This research suggests new approaches to overcome these limitations and find optimal neural network architectures for learning particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case of a real complex system. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  19. A NEURAL NETWORK BASED TRAFFIC-AWARE FORWARDING STRATEGY IN NAMED DATA NETWORKING

    OpenAIRE

    Parisa Bazmi; Manijeh Keshtgary

    2016-01-01

    Named Data Networking (NDN) is a new Internet architecture which has been proposed to eliminate TCP/IP Internet architecture restrictions. This architecture abstracts away the notion of a host and works based on named datagrams. However, one of the major challenges of NDN is supporting a QoS-aware forwarding strategy so as to forward Interest packets intelligently over multiple paths based on the current network condition. In this paper, Neural Network (NN) Based Traffic-aware Forwarding ...

  20. Control Architecture for Intentional Island Operation in Distribution Network with High Penetration of Distributed Generation

    DEFF Research Database (Denmark)

    Chen, Yu

    ...to utilize them for maintaining the security of the power supply under emergency situations has been of great interest for study. One proposal is the intentional island operation. This PhD project is intended to develop a control architecture for island operation in distribution systems with a high amount of DGs. As part of the NextGen project, this project focuses on the system modeling and simulation regarding the control architecture and recommends the development of a communication and information exchange system based on IEC 61850. This thesis starts with the background of this PhD project... The feasibility of applying Artificial Neural Networks (ANN) to ICA is also studied, in order to improve the computation efficiency of the ISR calculation. Finally, the integration of ICA into Dynamic Security Assessment (DSA), the ICA implementation, and the development of ICA are discussed.

  1. Exporting Humanist Architecture

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2016-01-01

    The article is a chapter in the catalogue for the Danish exhibition at the 2016 Architecture Biennale in Venice. The catalogue is conceived as an independent book exploring the theme Art of Many - The Right to Space. The chapter is an essay in this anthology tracing and discussing the different values and ethical stands involved in the export of Danish architecture. Abstract: Danish architecture has, in a sense, been driven by an unwritten contract between the architects and the democratic state and its institutions. This contract may be viewed as an ethos – an architectural tradition with inherent aesthetic and moral values. Today, however, Danish architecture is also an export commodity. That raises questions, which should be debated as openly as possible. What does it mean for architecture and architects to practice in cultures and under political systems that do not use architecture...

  2. Classification of non-performing loans portfolio using Multilayer Perceptron artificial neural networks

    Directory of Open Access Journals (Sweden)

    Flávio Clésio Silva de Souza

    2014-06-01

    Full Text Available The purpose of the present research is to apply a Multilayer Perceptron (MLP) neural network technique to create classification models from a portfolio of Non-Performing Loans (NPLs) to classify this type of credit derivative. These credit derivatives are characterized as the amount of loans that were not paid and are already overdue by more than 90 days. Since these titles are, for legislative reasons, written off as losses, Credit Rights Investment Funds (FDIC) perform the purchase of these debts and the recovery of the credits. Using the Multilayer Perceptron (MLP) architecture of Artificial Neural Networks (ANN), classification models regarding the posterior recovery of these debts were created. To evaluate the performance of the models, classification evaluation metrics for the neural networks with different architectures are presented. The results of the classifications were satisfactory, given that the classification models were successful within the presented economic cost structure.

  3. Hearing loss impacts neural alpha oscillations under adverse listening conditions

    Directory of Open Access Journals (Sweden)

    Eline Borch Petersen

    2015-02-01

    Full Text Available Degradations in external, acoustic stimulation have long been suspected to increase the load on working memory. One neural signature of working memory load is enhanced power of alpha oscillations (6-12 Hz). However, it is unknown to what extent common internal, auditory degradation, that is, hearing impairment, affects the neural mechanisms of working memory when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors memory load and background noise level, while the electroencephalogram (EEG) was recorded. In each trial, participants were presented with 2, 4, or 6 spoken digits embedded in one of three different levels of background noise. After a stimulus-free delay interval, participants indicated whether a probe digit had appeared in the sequence of digits. Participants were healthy older adults (62-86 years) with normal to moderately impaired hearing. Importantly, the background noise levels were individually adjusted and participants were wearing hearing aids to equalize audibility across participants. Irrespective of hearing loss, behavioral performance improved with lower memory load and also with lower levels of background noise. Interestingly, the alpha power in the stimulus-free delay interval was dependent on the interplay between task demands (memory load and noise level) and hearing loss; while alpha power increased with hearing loss during low and intermediate levels of memory load and background noise, it dropped for participants with the relatively most severe hearing loss under the highest memory load and background noise level. These findings suggest that adaptive neural mechanisms for coping with adverse listening conditions break down for higher degrees of hearing loss, even when adequate hearing aid amplification is in place.
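
    For readers unfamiliar with the measure, the sketch below shows one conventional way to estimate alpha-band power (here 6-12 Hz, the band given above) in a stimulus-free delay interval from a single EEG channel using Welch's method; the signal is synthetic and this is not the study's analysis pipeline.

    import numpy as np
    from scipy.signal import welch

    fs = 250                                    # assumed sampling rate in Hz
    t = np.arange(0, 3.0, 1 / fs)               # a 3 s stimulus-free delay interval
    rng = np.random.default_rng(4)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)   # 10 Hz rhythm plus noise

    freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # power spectral density at ~1 Hz resolution
    alpha = (freqs >= 6) & (freqs <= 12)
    alpha_power = np.trapz(psd[alpha], freqs[alpha])   # integrate the PSD over the alpha band
    print(f"alpha-band power: {alpha_power:.3f}")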

  4. Evaluation of a deep learning architecture for MR imaging prediction of ATRX in glioma patients

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J.

    2018-02-01

    Predicting mutation/loss of the alpha-thalassemia/mental retardation syndrome X-linked (ATRX) gene utilizing MR imaging is of high importance since it is a predictor of response and prognosis in brain tumors. In this study, we compare a deep neural network approach based on a residual deep neural network (ResNet) architecture and one based on a classical machine learning approach and evaluate their ability in predicting ATRX mutation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture, pre-trained on ImageNet data, was the best performing model, achieving an accuracy of 0.91 for the test set (classification of a slice as no tumor, ATRX mutated, or ATRX non-mutated) in terms of f1 score in a test set of 35 cases. The SVM classifier achieved 0.63 for differentiating the Flair signal abnormality regions from the test patients based on their mutation status. We report a method that alleviates the need for extensive preprocessing and acts as a proof of concept that deep neural network architectures can be used to predict molecular biomarkers from routine medical images.
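
    The sketch below outlines the general transfer-learning setup described above (an ImageNet-pretrained ResNet50 backbone with a three-class head for no tumor / ATRX mutated / non-mutated slices) in Keras; the input size, head layers, and training schedule are assumptions, not the paper's exact configuration, and the fit call uses hypothetical variable names.

    import tensorflow as tf
    from tensorflow.keras.applications import ResNet50

    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                        input_shape=(224, 224, 3))
    backbone.trainable = False                     # assumption: train only the new head at first

    inputs = tf.keras.Input(shape=(224, 224, 3))   # MR slices replicated to 3 channels
    x = backbone(inputs)
    outputs = tf.keras.layers.Dense(3, activation="softmax")(x)   # no tumor / mutated / non-mutated

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    # model.fit(train_slices, train_labels, validation_data=(val_slices, val_labels), epochs=10)
    # (train_slices, train_labels, etc. are hypothetical arrays of slices and class labels)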

  5. δ-Catenin Regulates Spine Architecture via Cadherin and PDZ-dependent Interactions*

    Science.gov (United States)

    Yuan, Li; Seong, Eunju; Beuscher, James L.; Arikkath, Jyothi

    2015-01-01

    The ability of neurons to maintain spine architecture and modulate it in response to synaptic activity is a crucial component of the cellular machinery that underlies information storage in pyramidal neurons of the hippocampus. Here we show a critical role for δ-catenin, a component of the cadherin-catenin cell adhesion complex, in regulating spine head width and length in pyramidal neurons of the hippocampus. The loss of Ctnnd2, the gene encoding δ-catenin, has been associated with the intellectual disability observed in the cri du chat syndrome, suggesting that the functional roles of δ-catenin are vital for neuronal integrity and higher order functions. We demonstrate that loss of δ-catenin in a mouse model or knockdown of δ-catenin in pyramidal neurons compromises spine head width and length, without altering spine dynamics. This is accompanied by a reduction in the levels of synaptic N-cadherin. The ability of δ-catenin to modulate spine architecture is critically dependent on its ability to interact with cadherin and PDZ domain-containing proteins. We propose that loss of δ-catenin during development perturbs synaptic architecture leading to developmental aberrations in neural circuit formation that contribute to the learning disabilities in a mouse model and humans with cri du chat syndrome. PMID:25724647

  6. Study Under AC Stimulation on Excitement Properties of Weighted Small-World Biological Neural Networks with Side-Restrain Mechanism

    International Nuclear Information System (INIS)

    Yuan Wujie; Luo Xiaoshu; Jiang Pinqun

    2007-01-01

    In this paper, we propose a new model of weighted small-world biological neural networks based on biophysical Hodgkin-Huxley neurons with a side-restrain mechanism. We then study the excitement properties of the model under alternating current (AC) stimulation. The study shows that the excitement properties of the networks are largely consistent with the behavioral properties of a brain nervous system under different AC stimuli, such as the refractory period and the neural excitement response induced by different intensities of noise and coupling. The results of the study provide a useful reference for brain electrophysiology and epistemological science.

  7. Fluid and flexible minds: Intelligence reflects synchrony in the brain’s intrinsic network architecture

    Directory of Open Access Journals (Sweden)

    Michael A. Ferguson

    2017-06-01

    Full Text Available Human intelligence has been conceptualized as a complex system of dissociable cognitive processes, yet studies investigating the neural basis of intelligence have typically emphasized the contributions of discrete brain regions or, more recently, of specific networks of functionally connected regions. Here we take a broader, systems perspective in order to investigate whether intelligence is an emergent property of synchrony within the brain's intrinsic network architecture. Using a large sample of resting-state fMRI and cognitive data (n = 830), we report that the synchrony of functional interactions within and across distributed brain networks reliably predicts fluid and flexible intellectual functioning. By adopting a whole-brain, systems-level approach, we were able to reliably predict individual differences in human intelligence by characterizing features of the brain's intrinsic network architecture. These findings hold promise for the eventual development of neural markers to predict changes in intellectual function that are associated with neurodevelopment, normal aging, and brain disease. In our study, we aimed to understand how individual differences in intellectual functioning are reflected in the intrinsic network architecture of the human brain. We applied statistical methods, known as spectral decompositions, in order to identify individual differences in the synchronous patterns of spontaneous brain activity that reliably predict core aspects of human intelligence. The synchrony of brain activity at rest across multiple discrete neural networks demonstrated positive relationships with fluid intelligence. In contrast, global synchrony within the brain's network architecture reliably, and inversely, predicted mental flexibility, a core facet of intellectual functioning. The multinetwork systems approach described here represents a methodological and conceptual extension of earlier efforts that related differences in

  8. Evaluating deep learning architectures for Speech Emotion Recognition.

    Science.gov (United States)

    Fayek, Haytham M; Lech, Margaret; Cavedon, Lawrence

    2017-08-01

    Speech Emotion Recognition (SER) can be regarded as a static or dynamic classification problem, which makes SER an excellent test bed for investigating and comparing various deep learning architectures. We describe a frame-based formulation to SER that relies on minimal speech processing and end-to-end deep learning to model intra-utterance dynamics. We use the proposed SER system to empirically explore feed-forward and recurrent neural network architectures and their variants. Experiments conducted illuminate the advantages and limitations of these architectures in paralinguistic speech recognition and emotion recognition in particular. As a result of our exploration, we report state-of-the-art results on the IEMOCAP database for speaker-independent SER and present quantitative and qualitative assessments of the models' performances. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Poor Consumer Comprehension and Plan Selection Inconsistencies Under the 2016 Choice Architecture

    Directory of Open Access Journals (Sweden)

    Annabel Z. Wang BA

    2017-06-01

    Full Text Available Background: Many health policy experts have endorsed insurance competition as a way to reduce the cost and improve the quality of medical care. In line with this approach, health insurance exchanges, such as HealthCare.gov, allow consumers to compare insurance plans online. Since the 2013 rollout of HealthCare.gov, administrators have added features intended to help consumers better understand and compare insurance plans. Although well-intentioned, changes to exchange websites affect the context in which consumers view plans, or choice architecture, which may impede their ability to choose plans that best fit their needs at the lowest cost. Methods: By simulating the 2016 HealthCare.gov enrollment experience in an online sample of 374 American adults, we examined comprehension and choice of HealthCare.gov plans under its choice architecture. Results: We found room for improvement in plan comprehension, with higher rates of misunderstanding among participants with poor math skills ( P 0.9. Limitations: Participants were drawn from a general population sample. The study does not assess for all possible plan choice influencers, such as provider networks, brand recognition, or help from others. Conclusions: Our findings suggest two areas of improvement for exchanges: first, the remaining gap in consumer plan comprehension and, second, the apparent influence of sorting order—and likely other choice architecture elements—on plan choice. Our findings inform strategies for exchange administrators to help consumers understand and select plans that better fit their needs.

  10. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  11. Recognition of sign language gestures using neural networks

    OpenAIRE

    Simon Vamplew

    2007-01-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  12. Neural systems language: a formal modeling language for the systematic description, unambiguous communication, and automated digital curation of neural connectivity.

    Science.gov (United States)

    Brown, Ramsay A; Swanson, Larry W

    2013-09-01

    Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to describe systematically the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which this data is curated into neuroinformatics databases to synthesize digitally systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to encode and communicate systematically reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.

  13. A neural network based seafloor classification using acoustic backscatter

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    This paper presents the results of a study of Artificial Neural Network (ANN) architectures [Self-Organizing Map (SOM) and Multi-Layer Perceptron (MLP)] using single beam echosounding data. The single beam echosounder, operable at 12 kHz, has been used...

  14. Random noise effects in pulse-mode digital multilayer neural networks.

    Science.gov (United States)

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
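
    The arithmetic idea behind such pulse-mode networks can be shown in a few lines: values in [0, 1] are encoded as pulse-occurrence probabilities in pseudorandom sequences, and a product is estimated by ANDing two independent streams. The sketch below illustrates only this principle, not the paper's VHDL implementation.

    import numpy as np

    rng = np.random.default_rng(5)
    n_pulses = 10_000          # stream length; accuracy improves with longer streams

    w, x = 0.6, 0.3            # a synaptic weight and a neuron state, both encoded as probabilities
    w_stream = rng.random(n_pulses) < w    # Bernoulli pulse sequence encoding w
    x_stream = rng.random(n_pulses) < x    # Bernoulli pulse sequence encoding x

    product_stream = w_stream & x_stream   # a single AND gate per pulse
    estimate = product_stream.mean()       # average pulse rate approximates w * x

    print(f"exact product: {w * x:.3f}, stochastic estimate: {estimate:.3f}")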

  15. Application of adaptive boosting to EP-derived multilayer feed-forward neural networks (MLFN) to improve benign/malignant breast cancer classification

    Science.gov (United States)

    Land, Walker H., Jr.; Masters, Timothy D.; Lo, Joseph Y.; McKee, Dan

    2001-07-01

    A new neural network technology was developed for improving the benign/malignant diagnosis of breast cancer using mammogram findings. A new paradigm, Adaptive Boosting (AB), uses a markedly different theory in solving Computational Intelligence (CI) problems. AB, a new machine learning paradigm, focuses on finding weak learning algorithm(s) that initially need to provide slightly better than random performance (i.e., approximately 55%) when processing a mammogram training set. Then, by successive development of additional architectures (using the mammogram training set), the adaptive boosting process improves the performance of the basic Evolutionary Programming derived neural network architectures. The results of these several EP-derived hybrid architectures are then intelligently combined and tested using a similar validation mammogram data set. Optimization focused on improving specificity and positive predictive value at very high sensitivities, where an analysis of the performance of the hybrid would be most meaningful. Using the DUKE mammogram database of 500 biopsy proven samples, on average this hybrid was able to achieve (under statistical 5-fold cross-validation) a specificity of 48.3% and a positive predictive value (PPV) of 51.8% while maintaining 100% sensitivity. At 97% sensitivity, a specificity of 56.6% and a PPV of 55.8% were obtained.
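
    As a generic illustration of the boosting loop (not the EP-derived hybrid networks or the DUKE data of the study), the sketch below trains small MLP weak learners on weight-based resamples and combines them by a weighted vote, in the usual AdaBoost fashion, on synthetic two-class data with labels in {-1, +1}.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(6)
    X = rng.normal(size=(400, 10))                     # placeholder mammogram features
    y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)   # placeholder benign/malignant labels

    weights = np.full(len(y), 1 / len(y))
    learners, alphas = [], []

    for _ in range(5):
        idx = rng.choice(len(y), size=len(y), p=weights)        # resample according to sample weights
        clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=1000, random_state=0)
        clf.fit(X[idx], y[idx])
        pred = clf.predict(X)
        err = weights[pred != y].sum()
        err = min(max(err, 1e-10), 0.499)                       # keep the learner "weak but useful"
        alpha = 0.5 * np.log((1 - err) / err)
        weights *= np.exp(-alpha * y * pred)                    # up-weight misclassified samples
        weights /= weights.sum()
        learners.append(clf)
        alphas.append(alpha)

    ensemble = np.sign(sum(a * clf.predict(X) for a, clf in zip(alphas, learners)))
    print("training accuracy of the boosted ensemble:", (ensemble == y).mean())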

  16. Anger under control: neural correlates of frustration as a function of trait aggression.

    Directory of Open Access Journals (Sweden)

    Christina M Pawliczek

    Full Text Available Antisocial behavior and aggression are prominent symptoms in several psychiatric disorders including antisocial personality disorder. An established precursor to aggression is a frustrating event, which can elicit anger or exasperation, thereby prompting aggressive responses. While some studies have investigated the neural correlates of frustration and aggression, examinations of their relation to trait aggression in healthy populations are rare. Based on a screening of 550 males, we formed two extreme groups, one including individuals reporting high (n=21) and one reporting low (n=18) trait aggression. Using functional magnetic resonance imaging (fMRI) at 3T, all participants were put through a frustration task comprising unsolvable anagrams of German nouns. Despite similar behavioral performance, males with high trait aggression reported higher ratings of negative affect and anger after the frustration task. Moreover, they showed relatively decreased activation in the frontal brain regions and the dorsal anterior cingulate cortex (dACC) as well as relatively less amygdala activation in response to frustration. Our findings indicate distinct frontal and limbic processing mechanisms following frustration modulated by trait aggression. In response to a frustrating event, HA individuals show some of the personality characteristics and neural processing patterns observed in abnormally aggressive populations. Highlighting the impact of aggressive traits on the behavioral and neural responses to frustration in non-psychiatric extreme groups can facilitate further characterization of neural dysfunctions underlying psychiatric disorders that involve abnormal frustration processing and aggression.

  17. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.

  18. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  19. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We propose an approach for the recognition of various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG by using an artificial neural network approach. After training, the artificial neural network reliably identified cube recognition processes, for example, left- or right-oriented Necker cubes with different intensities of their edges. We construct an artificial neural network based on a Perceptron architecture and demonstrate its effectiveness in the pattern recognition of the experimental EEG.

  20. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adaptation of artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. A part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, as well as the importance of robustness. The book has tutorial value and can be seen as a good starting point for newcomers to this field. The book is also devoted to advanced schemes of description of neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  1. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.
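
    Two of the architectural ideas mentioned above, a two-pathway design that combines local and larger-context features and a final 1x1 convolution acting as a convolutional implementation of a fully connected layer, can be sketched in Keras as below; filter counts, patch size, and class count are placeholders rather than the paper's configuration.

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(33, 33, 4))            # multi-modal MR patch (e.g., 4 sequences)

    # Local pathway: small kernels for fine detail (spatial size 33 -> 27 -> 13 -> 11).
    local = layers.Conv2D(64, 7, activation="relu")(inputs)
    local = layers.MaxPooling2D(2)(local)
    local = layers.Conv2D(64, 3, activation="relu")(local)

    # Global pathway: one large kernel for broader context (spatial size 33 -> 11).
    glob = layers.Conv2D(64, 23, activation="relu")(inputs)

    merged = layers.Concatenate()([local, glob])          # both pathways are 11x11 feature maps here

    # 1x1 convolution as the convolutional counterpart of a fully connected layer,
    # producing per-position class scores (5 classes as a placeholder).
    outputs = layers.Conv2D(5, 1, activation="softmax")(merged)

    model = tf.keras.Model(inputs, outputs)
    model.summary()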

  2. A Basic Architecture of an Autonomous Adaptive System With Conscious-Like Function for a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Yasuo Kinouchi

    2018-04-01

    Full Text Available In developing a humanoid robot, there are two major objectives. One is developing a physical robot having body, hands, and feet resembling those of human beings and being able to similarly control them. The other is to develop a control system that works similarly to our brain, to feel, think, act, and learn like ours. In this article, an architecture of a control system with a brain-oriented logical structure for the second objective is proposed. The proposed system autonomously adapts to the environment and implements a clearly defined “consciousness” function, through which both habitual behavior and goal-directed behavior are realized. Consciousness is regarded as a function for effective adaptation at the system-level, based on matching and organizing the individual results of the underlying parallel-processing units. This consciousness is assumed to correspond to how our mind is “aware” when making our moment to moment decisions in our daily life. The binding problem and the basic causes of delay in Libet’s experiment are also explained by capturing awareness in this manner. The goal is set as an image in the system, and efficient actions toward achieving this goal are selected in the goal-directed behavior process. The system is designed as an artificial neural network and aims at achieving consistent and efficient system behavior, through the interaction of highly independent neural nodes. The proposed architecture is based on a two-level design. The first level, which we call the “basic-system,” is an artificial neural network system that realizes consciousness, habitual behavior and explains the binding problem. The second level, which we call the “extended-system,” is an artificial neural network system that realizes goal-directed behavior.

  3. Neural computations underlying social risk sensitivity

    Directory of Open Access Journals (Sweden)

    Nina eLauharatanahirun

    2012-08-01

    Full Text Available Under standard models of expected utility, preferences over stochastic events are assumed to be independent of the source of uncertainty. Thus, in decision-making, an agent should exhibit consistent preferences, regardless of whether the uncertainty derives from the unpredictability of a random process or the unpredictability of a social partner. However, when a social partner is the source of uncertainty, social preferences can influence decisions over and above pure risk attitudes. Here, we compared risk-related hemodynamic activity and individual preferences for two sets of options that differ only in the social or non-social nature of the risk. Risk preferences in social and non-social contexts were systematically related to neural activity during decision and outcome phases of each choice. Individuals who were more risk averse in the social context exhibited decreased risk-related activity in the amygdala during non-social decisions, while individuals who were more risk averse in the non-social context exhibited the opposite pattern. Differential risk preferences were similarly associated with hemodynamic activity in ventral striatum at the outcome of these decisions. These findings suggest that social preferences, including aversion to betrayal or exploitation by social partners, may be associated with variability in the response of these subcortical regions to social risk.

  4. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, and the current day of the week is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
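
    The abstract does not list the exact indicators, so the sketch below is only a hedged example of this style of forecaster: a handful of common technical indicators plus the day of the week feed a small MLP that predicts next-day price movement on a synthetic price series.

    import numpy as np
    import pandas as pd
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(7)
    dates = pd.bdate_range("2020-01-01", periods=600)
    price = pd.Series(100 + np.cumsum(rng.normal(0, 1, len(dates))), index=dates)   # synthetic prices

    features = pd.DataFrame({
        "sma_ratio": price / price.rolling(10).mean(),         # price vs its 10-day moving average
        "momentum": price.diff(5),                             # 5-day momentum
        "volatility": price.pct_change().rolling(10).std(),    # 10-day volatility
        "roc": price.pct_change(10),                           # 10-day rate of change
        "weekday": dates.dayofweek,                            # current day of the week
    })
    target = (price.shift(-1) > price).astype(int)             # 1 if the price rises the next day

    data = features.assign(target=target).iloc[:-1].dropna()   # drop warm-up rows and the last day
    X, y = data.drop(columns="target"), data["target"]

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X[:-100], y[:-100])                                # train on all but the last 100 days
    print("out-of-sample accuracy:", clf.score(X[-100:], y[-100:]))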

  5. A customizable stochastic state point process filter (SSPPF) for neural spiking activity.

    Science.gov (United States)

    Xin, Yao; Li, Will X Y; Min, Biao; Han, Yan; Cheung, Ray C C

    2013-01-01

    The Stochastic State Point Process Filter (SSPPF) is effective for adaptive signal processing. In particular, it has been successfully applied to neural signal coding/decoding in recent years. Recent work has proven its efficiency in non-parametric coefficient tracking in modeling of the mammalian nervous system. However, existing SSPPF implementations have only been realized on commercial software platforms, which limits their computational capability. In this paper, the first hardware architecture for the SSPPF is designed and successfully implemented on a field-programmable gate array (FPGA), providing a more efficient means for coefficient tracking in a well-established generalized Laguerre-Volterra model for mammalian hippocampal spiking activity research. By exploiting the intrinsic parallelism of the FPGA, the proposed architecture is able to process matrices or vectors of arbitrary size, and is efficiently scalable. Experimental results show its superior performance compared to the software implementation, while maintaining the numerical precision. This architecture can also potentially be utilized in future hippocampal cognitive neural prosthesis design.

  6. Adaptive neural network motion control for aircraft under uncertainty conditions

    Science.gov (United States)

    Efremov, A. V.; Tiaglik, M. S.; Tiumentsev, Yu V.

    2018-02-01

    We need to provide motion control of modern and advanced aircraft under diverse uncertainty conditions. This problem can be solved by using adaptive control laws. We carry out an analysis of the capabilities of these laws for such adaptive systems as MRAC (Model Reference Adaptive Control) and MPC (Model Predictive Control). In the case of a nonlinear control object, the most efficient solution to the adaptive control problem is the use of neural network technologies. These technologies are suitable for the development of both a control object model and a control law for the object. The approximate nature of the ANN model was taken into account by introducing additional compensating feedback into the control system. The capabilities of adaptive control laws under uncertainty in the source data are considered. We also conduct simulations to assess the contribution of adaptivity to the behavior of the system.

  7. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. With the weights of the trained neural networks there are created kernel windows used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  8. On-Line Tracking Controller for Brushless DC Motor Drives Using Artificial Neural Networks

    Science.gov (United States)

    Rubaai, Ahmed

    1996-01-01

    A real-time control architecture is developed for time-varying nonlinear brushless dc motors operating in a high performance drives environment. The developed control architecture possesses the capabilities of simultaneous on-line identification and control. The dynamics of the motor are modeled on-line and controlled using an artificial neural network, as the system runs. The control architecture combines the experience and dependability of adaptive tracking systems with the potential and promise of neural computing technology. The sensitivity of the real-time controller to parametric changes that occur during training is investigated. Such changes are usually manifested by rapid changes in the load of the brushless motor drives. This sudden change in the external load is simulated for the sigmoidal and sinusoidal reference tracks. The ability of the neuro-controller to maintain reasonable tracking accuracy in the presence of external noise is also verified for a number of desired reference trajectories.

  9. Neural networks for perception human and machine perception

    CERN Document Server

    Wechsler, Harry

    1991-01-01

    Neural Networks for Perception, Volume 1: Human and Machine Perception focuses on models for understanding human perception in terms of distributed computation and examples of PDP models for machine perception. This book addresses both theoretical and practical issues related to the feasibility of both explaining human perception and implementing machine perception in terms of neural network models. The book is organized into two parts. The first part focuses on human perception. Topics include the network model of object recognition in human vision and the self-organization of functional architecture in t...

  10. Formal Models of the Network Co-occurrence Underlying Mental Operations.

    Science.gov (United States)

    Bzdok, Danilo; Varoquaux, Gaël; Grisel, Olivier; Eickenberg, Michael; Poupon, Cyril; Thirion, Bertrand

    2016-06-01

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition.
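    As an illustration of the decomposition idea (not the authors' pipeline), the sketch below expresses a task activity map as a non-negative combination of resting-state network templates; the templates and the task map are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

rng = np.random.default_rng(0)

n_voxels, n_networks = 5000, 10
# Hypothetical resting-state network templates (one spatial map per column).
templates = rng.normal(size=(n_voxels, n_networks))

# Synthetic "task" map built from a sparse mix of networks plus noise.
true_weights = np.array([0.8, 0.0, 0.3, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0])
task_map = templates @ true_weights + 0.1 * rng.normal(size=n_voxels)

# Estimate the relative contribution of each network to the task map.
weights, residual = nnls(templates, task_map)
print(np.round(weights, 2))
```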

  11. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    Directory of Open Access Journals (Sweden)

    Daniela Sánchez

    2017-01-01

    A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture; these parameters are the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their number of neurons. A great variety of approaches and new techniques has emerged within the evolutionary computing area to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to determine which of these techniques provides better results when applied to human recognition.
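    A minimal sketch of the canonical grey wolf optimizer update equations follows; the objective here is a simple test function standing in for the cost of an MGNN design, and all settings are illustrative rather than those used in the paper.

```python
import numpy as np

def grey_wolf_optimizer(objective, dim, n_wolves=20, n_iter=200, bounds=(-5.0, 5.0)):
    """Canonical grey wolf optimizer minimizing `objective` over a box."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))

    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]      # the three best wolves lead the pack
        a = 2.0 - 2.0 * t / n_iter                  # linearly decreasing exploration factor

        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                candidates.append(leader - A * D)
            wolves[i] = np.clip(np.mean(candidates, axis=0), lo, hi)

    best = wolves[np.argmin(np.apply_along_axis(objective, 1, wolves))]
    return best

# Stand-in objective; in the paper's setting this would score an MGNN design.
sphere = lambda x: float(np.sum(x ** 2))
print(grey_wolf_optimizer(sphere, dim=5))
```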

  12. High speed VLSI neural network for high energy physics

    NARCIS (Netherlands)

    Masa, P.; Masa, P.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    A CMOS neural network IC is discussed which was designed for very high speed applications. The parallel architecture, analog computing and digital weight storage provides unprecedented computing speed combined with ease of use. The circuit classifies up to 70 dimensional vectors within 20

  13. Phonematic translation of Polish texts by the neural network

    International Nuclear Information System (INIS)

    Bielecki, A.; Podolak, I.T.; Wosiek, J.; Majkut, E.

    1996-01-01

    Using the back propagation algorithm, we have trained the feed forward neural network to pronounce Polish language, more precisely to translate Polish text into its phonematic counterpart. Depending on the input coding and network architecture, 88%-95% translation efficiency was achieved. (author)

  14. Raingauge-Based Rainfall Nowcasting with Artificial Neural Network

    Science.gov (United States)

    Liong, Shie-Yui; He, Shan

    2010-05-01

    Rainfall forecasting and nowcasting are of great importance, for instance, in real-time flood early warning systems. Long term rainfall forecasting demands global climate, land, and sea data; thus, large computing power and storage capacity are required. Rainfall nowcasting's computing requirement, on the other hand, is much less. Rainfall nowcasting may use data captured by radar and/or weather stations. This paper presents the application of Artificial Neural Networks (ANN) to rainfall nowcasting using data observed at weather and/or rainfall stations. The study focuses on the North-East monsoon period (December, January and February) in Singapore. Rainfall and weather data from ten stations, between 2000 and 2006, were selected and divided into three groups for training, over-fitting testing and validation of the ANN. Several neural network architectures were tried in the study. Two architectures, the backpropagation ANN and the Group Method of Data Handling ANN, yielded better rainfall nowcasts, up to two hours ahead, than the other architectures. The obtained rainfall nowcasts were then used by a catchment model to forecast catchment runoff. The runoff forecasts are encouraging and promising. With the ANN's high computational speed, the proposed approach may be suitable for building real-time flood early warning systems.

  15. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  16. vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

    OpenAIRE

    Rhu, Minsoo; Gimelshein, Natalia; Clemons, Jason; Zulfiqar, Arslan; Keckler, Stephen W.

    2016-01-01

    The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU...

  17. Improving the Robustness of Deep Neural Networks via Stability Training

    OpenAIRE

    Zheng, Stephan; Song, Yang; Leung, Thomas; Goodfellow, Ian

    2016-01-01

    In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such...
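    A hedged PyTorch sketch of the stability-training objective described above: the usual task loss is augmented with a penalty that keeps the network's output on a perturbed copy of the input close to its output on the clean input. The Gaussian perturbation and the weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def stability_loss(model, x, y, alpha=0.01, noise_std=0.04):
    """Task loss plus a penalty tying outputs on perturbed inputs to clean ones."""
    x_perturbed = x + noise_std * torch.randn_like(x)
    logits_clean = model(x)
    logits_perturbed = model(x_perturbed)
    task = F.cross_entropy(logits_clean, y)
    # Treat the clean output as the target for the perturbed copy.
    stability = F.mse_loss(logits_perturbed, logits_clean.detach())
    return task + alpha * stability

# Usage with any classifier `model`, inputs `x` and labels `y`:
# loss = stability_loss(model, x, y); loss.backward(); optimizer.step()
```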

  18. Ideomotor feedback control in a recurrent neural network.

    Science.gov (United States)

    Galtier, Mathieu

    2015-06-01

    The architecture of a neural network controlling an unknown environment is presented. It is based on a randomly connected recurrent neural network from which both perception and action are simultaneously read and fed back. There are two concurrent learning rules implementing a sort of ideomotor control: (i) perception is learned along the principle that the network should predict reliably its incoming stimuli; (ii) action is learned along the principle that the prediction of the network should match a target time series. The coherent behavior of the neural network in its environment is a consequence of the interaction between the two principles. Numerical simulations show promising performance of the approach, which can be turned into a local and more biologically plausible algorithm.

  19. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    Science.gov (United States)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
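    The following numpy sketch shows the restricted-Boltzmann-machine wave-function Ansatz referred to above, evaluated for a single spin configuration; the parameters are random placeholders rather than trained values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_spins, n_hidden = 10, 20

# Complex RBM parameters (visible bias a, hidden bias b, couplings W).
a = rng.normal(size=n_spins) + 1j * rng.normal(size=n_spins)
b = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)
W = 0.1 * (rng.normal(size=(n_spins, n_hidden)) + 1j * rng.normal(size=(n_spins, n_hidden)))

def rbm_amplitude(s):
    """Unnormalized wave-function amplitude psi(s) for spins s in {-1, +1}^n."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

s = rng.choice([-1.0, 1.0], size=n_spins)
print(rbm_amplitude(s))
```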

  20. Noradrenergic modulation of neural erotic stimulus perception.

    Science.gov (United States)

    Graf, Heiko; Wiegers, Maike; Metzger, Coraline Danielle; Walter, Martin; Grön, Georg; Abler, Birgit

    2017-09-01

    We recently investigated neuromodulatory effects of the noradrenergic agent reboxetine and the dopamine receptor affine amisulpride in healthy subjects on dynamic erotic stimulus processing. Whereas amisulpride left sexual functions and neural activations unimpaired, we observed detrimental activations under reboxetine within the caudate nucleus corresponding to motivational components of sexual behavior. However, broadly impaired subjective sexual functioning under reboxetine suggested effects on further neural components. We now investigated the same sample under these two agents with static erotic picture stimulation as an alternative stimulus presentation mode to potentially observe further neural treatment effects of reboxetine. 19 healthy males were investigated under reboxetine, amisulpride and placebo for 7 days each within a double-blind cross-over design. During fMRI, static erotic pictures were presented with preceding anticipation periods. Subjective sexual functions were assessed by a self-reported questionnaire. Neural activations were attenuated within the caudate nucleus, putamen, ventral striatum, the pregenual and anterior midcingulate cortex and in the orbitofrontal cortex under reboxetine. Subjectively diminished sexual arousal under reboxetine was correlated with attenuated neural reactivity within the posterior insula. Again, amisulpride left neural activations along with subjective sexual functioning unimpaired. Neither reboxetine nor amisulpride altered differential neural activations during anticipation of erotic stimuli. Our results verified detrimental effects of noradrenergic agents on neural motivational but also emotional and autonomic components of sexual behavior. Considering the overlap of neural network alterations with those evoked by serotonergic agents, our results suggest similar neuromodulatory effects of serotonergic and noradrenergic agents on common neural pathways relevant for sexual behavior.

  1. On design and evaluation of tapped-delay neural network architectures

    DEFF Research Database (Denmark)

    Svarer, Claus; Hansen, Lars Kai; Larsen, Jan

    1993-01-01

    Pruning and evaluation of tapped-delay neural networks for the sunspot benchmark series are addressed. It is shown that the generalization ability of the networks can be improved by pruning using the optimal brain damage method of Le Cun, Denker and Solla. A stop criterion for the pruning algorithm...
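    A small sketch of the optimal-brain-damage saliency used for pruning: assuming a diagonal Hessian approximation, each weight w_i is scored as h_ii * w_i^2 / 2 and the least salient weights are removed. The Hessian diagonal below is a random placeholder.

```python
import numpy as np

def obd_prune(weights, hessian_diag, prune_fraction=0.2):
    """Zero out the weights with the smallest optimal-brain-damage saliency."""
    saliency = 0.5 * hessian_diag * weights ** 2   # OBD estimate of the loss increase
    n_prune = int(prune_fraction * weights.size)
    prune_idx = np.argsort(saliency)[:n_prune]     # least important weights
    pruned = weights.copy()
    pruned[prune_idx] = 0.0
    return pruned, prune_idx

rng = np.random.default_rng(3)
w = rng.normal(size=100)
h = np.abs(rng.normal(size=100))                   # stand-in for the Hessian diagonal
w_pruned, removed = obd_prune(w, h)
print(f"removed {removed.size} of {w.size} weights")
```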

  2. Meta-Key: A Secure Data-Sharing Protocol under Blockchain-Based Decentralised Storage Architecture

    OpenAIRE

    Fu, Yue

    2017-01-01

    In this paper a secure data-sharing protocol under a blockchain-based decentralised storage architecture is proposed, which serves users who need to share their encrypted data on-cloud. It implements a remote data-sharing mechanism that enables data owners to share their encrypted data with other users without revealing the original key. Nor do they have to download on-cloud data with re-encryption and re-uploading. Data security as well as efficiency are ensured by symmetric encryption, whose k...

  3. Nonlinear adaptive inverse control via the unified model neural network

    Science.gov (United States)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for feedforward/recurrent neural networks. It turns out that the proposed method can use less training time to get an inverse model. Finally, we apply this proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
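    The sketch below illustrates the functional-expansion idea behind a Chebyshev-polynomial-based network: each input is expanded with Chebyshev polynomials and a linear output layer is fitted. It is a stand-in for the CPBUM approach, fitted here by ordinary least squares rather than the authors' procedure.

```python
import numpy as np

def chebyshev_features(x, order=5):
    """Expand scalar inputs x (scaled to [-1, 1]) with Chebyshev polynomials T_0..T_order."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])            # recurrence T_{n+1} = 2x T_n - T_{n-1}
    return np.stack(T, axis=1)

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, size=200)
y = np.sin(3 * x) + 0.05 * rng.normal(size=200)    # nonlinear map to approximate

Phi = chebyshev_features(x, order=7)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # linear output layer
print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```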

  4. Automatic target recognition using a feature-based optical neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space objects discrimination is also presented.

  5. Shapes and geometries underlying the religious architecture in the 18th century

    Directory of Open Access Journals (Sweden)

    Sebastiano Giuliano

    2015-07-01

    of great renovation, known as the "Sicilian Baroque". The second part of this research makes use of a very accurate graphic analysis aimed at understanding the sizing and proportioning methodologies used in the design phase. Sizes and proportions are essential to understand the work in its overall shape; they also make comparisons possible, regardless of the sculptural and decorative apparatus and its architectural shape. The discovery of the underlying geometrical and reference patterns allows the researchers to formulate hypotheses on the proportions of the project, even if no drawings survive; the main goal is to understand the origin of the project so as to identify the simple pattern to which all probable similarities or differences can be referred. The geometrical analysis is therefore the way through which it is possible to study the design method of valuable works in Eastern Sicily. As altars are integral parts of churches, "actual architectures inside the architecture" and an expression of the complexity and the conformative dynamism of baroque architecture, this research is based on these valuable elements, whose intermingling of shapes, functions and meanings leads, from a figurative point of view, to constructions very similar to facades. If the project design is a graphic-historical document telling about the geometrical apparatus of the architectural element in its formal and symbolic relationships, the survey carried out through advanced technologies is fundamental in working backwards to that design. The comparison with works of the age and the graphic analysis of the architectures on both small and large scales aim to reveal and give back all those relationships which are at the origin of the design project, which have granted their conveyance to the different dimensional levels and their disclosure in relatively distant areas.

  6. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxies architecture consists of 8 layers, one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  7. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

    OpenAIRE

    Wang, Linnan; Ye, Jinmian; Zhao, Yiyang; Wu, Wei; Li, Ang; Song, Shuaiwen Leon; Xu, Zenglin; Kraska, Tim

    2018-01-01

    Going deeper and wider in neural architectures improves accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures, or nontrivially dissect a network across multiple GPUs. These distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far be...

  8. Algorithms and architectures of artificial intelligence

    CERN Document Server

    Tyugu, E

    2007-01-01

    This book gives an overview of methods developed in artificial intelligence for search, learning, problem solving and decision-making. It gives an overview of algorithms and architectures of artificial intelligence that have reached the degree of maturity when a method can be presented as an algorithm, or when a well-defined architecture is known, e.g. in neural nets and intelligent agents. It can be used as a handbook for a wide audience of application developers who are interested in using artificial intelligence methods in their software products. Parts of the text are rather independent, so that one can look into the index and go directly to a description of a method presented in the form of an abstract algorithm or an architectural solution. The book can be used also as a textbook for a course in applied artificial intelligence. Exercises on the subject are added at the end of each chapter. Neither programming skills nor specific knowledge in computer science are expected from the reader. However, some p...

  9. Seafloor classification using acoustic backscatter echo-waveform - Artificial neural network applications

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Mahale, V.; Navelkar, G.S.; Desai, R.G.P.

    In this paper seafloor classifications system based on artificial neural network (ANN) has been designed. The ANN architecture employed here is a combination of Self Organizing Feature Map (SOFM) and Linear Vector Quantization (LVQ1). Currently...

  10. Neural mechanisms underlying transcranial direct current stimulation in aphasia: A feasibility study.

    Directory of Open Access Journals (Sweden)

    Lena eUlm

    2015-10-01

    Little is known about the neural mechanisms by which transcranial direct current stimulation (tDCS) impacts on language processing in post-stroke aphasia. This was addressed in a proof-of-principle study that explored the effects of tDCS application in aphasia during simultaneous functional magnetic resonance imaging (fMRI). We employed a single-subject, cross-over, sham-tDCS controlled design, and the stimulation was administered to an individualized perilesional stimulation site that was identified by a baseline fMRI scan and a picture naming task. Peak activity during the baseline scan was located in the spared left inferior frontal gyrus (IFG) and this area was stimulated during a subsequent cross-over phase. tDCS was successfully administered to the target region and anodal vs. sham tDCS resulted in selectively increased activity at the stimulation site. Our results thus demonstrate that it is feasible to precisely target an individualized stimulation site in aphasia patients during simultaneous fMRI, which allows assessing the neural mechanisms underlying tDCS application. The functional imaging results of this case report highlight one possible mechanism that may have contributed to beneficial behavioural stimulation effects in previous clinical tDCS trials in aphasia. In the future, this approach will allow identifying distinct patterns of stimulation effects on neural processing in larger cohorts of patients. This may ultimately yield information about the variability of tDCS effects on brain functions in aphasia.

  11. δ-Catenin Regulates Spine Architecture via Cadherin and PDZ-dependent Interactions.

    Science.gov (United States)

    Yuan, Li; Seong, Eunju; Beuscher, James L; Arikkath, Jyothi

    2015-04-24

    The ability of neurons to maintain spine architecture and modulate it in response to synaptic activity is a crucial component of the cellular machinery that underlies information storage in pyramidal neurons of the hippocampus. Here we show a critical role for δ-catenin, a component of the cadherin-catenin cell adhesion complex, in regulating spine head width and length in pyramidal neurons of the hippocampus. The loss of Ctnnd2, the gene encoding δ-catenin, has been associated with the intellectual disability observed in the cri du chat syndrome, suggesting that the functional roles of δ-catenin are vital for neuronal integrity and higher order functions. We demonstrate that loss of δ-catenin in a mouse model or knockdown of δ-catenin in pyramidal neurons compromises spine head width and length, without altering spine dynamics. This is accompanied by a reduction in the levels of synaptic N-cadherin. The ability of δ-catenin to modulate spine architecture is critically dependent on its ability to interact with cadherin and PDZ domain-containing proteins. We propose that loss of δ-catenin during development perturbs synaptic architecture leading to developmental aberrations in neural circuit formation that contribute to the learning disabilities in a mouse model and humans with cri du chat syndrome. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  12. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems.

    Science.gov (United States)

    González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier

    2017-06-02

    Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.

  13. A TLD dose algorithm using artificial neural networks

    International Nuclear Information System (INIS)

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-01-01

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of the functional link network (FLN). A neural network is an information-processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, being trained in this way, eventually becomes capable of producing its own solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD, and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.

  14. Ontogeny of neural circuits underlying spatial memory in the rat

    Directory of Open Access Journals (Sweden)

    James Alexander Ainge

    2012-03-01

    Spatial memory is a well characterised psychological function in both humans and rodents. The combined computations of a network of systems, including place cells in the hippocampus, grid cells in the medial entorhinal cortex and head direction cells found in numerous structures in the brain, have been suggested to form the neural instantiation of the cognitive map as first described by Tolman in 1948. However, while our understanding of the neural mechanisms underlying spatial representations in adults is relatively sophisticated, we know substantially less about how this network develops in young animals. In this article we review studies examining the developmental timescale that these systems follow. Electrophysiological recordings from very young rats show that directional information is at adult levels at the outset of navigational experience. The systems supporting allocentric memory, however, take longer to mature. This is consistent with behavioural studies of young rats which show that spatial memory based on head direction develops very early but that allocentric spatial memory takes longer to mature. We go on to report new data demonstrating that memory for associations between objects and their spatial locations is slower to develop than memory for objects alone. This is again consistent with previous reports suggesting that adult-like spatial representations have a protracted development in rats, and also suggests that the systems involved in processing non-spatial stimuli come online earlier.

  15. Evaluation of CNN architectures for gait recognition based on optical flow maps

    OpenAIRE

    Castro, F. M.; Marín-Jiménez, M.J.; Guil, N.; López-Tapia, S.; Pérez de la Blanca, N.

    2017-01-01

    This work targets people identification in video based on the way they walk (i.e., gait) by using deep learning architectures. We explore the use of convolutional neural networks (CNN) for learning high-level descriptors from low-level motion features (i.e., optical flow components). The low number of training samples for each subject and the use of a test set containing subjects different from the training ones makes the search of a good CNN architecture a challenging task.

  16. Prediction of Aerodynamic Coefficient using Genetic Algorithm Optimized Neural Network for Sparse Data

    Science.gov (United States)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    coefficients to an accuracy of 110% . In our problem, we would like to get an optimized neural network architecture and minimum data set. This has been accomplished within 500 training cycles of a neural network. After removing training pairs (outliers), the GA has produced much better results. The neural network constructed is a feed forward neural network with a back propagation learning mechanism. The main goal has been to free the network design process from constraints of human biases, and to discover better forms of neural network architectures. The automation of the network architecture search by genetic algorithms seems to have been the best way to achieve this goal.
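    As a toy illustration of evolving a network architecture parameter with a genetic algorithm (not the authors' implementation), the sketch below searches over the hidden-layer width of a small scikit-learn regressor on synthetic data standing in for sparse aerodynamic measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(300, 4))                  # stand-in flight-condition inputs
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)

def fitness(hidden):
    """Cross-validated R^2 of a network with `hidden` neurons in one hidden layer."""
    net = MLPRegressor(hidden_layer_sizes=(int(hidden),), max_iter=500, random_state=0)
    return cross_val_score(net, X, y, cv=3, scoring="r2").mean()

population = list(rng.integers(2, 40, size=8))          # candidate hidden-layer widths
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]                                 # selection of the fittest designs
    children = [max(2, p + int(rng.integers(-5, 6))) for p in parents]   # mutation
    population = parents + children

best = max(population, key=fitness)
print("best hidden width:", int(best))
```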

  17. Mixed Analog/Digital Matrix-Vector Multiplier for Neural Network Synapses

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Bruun, Erik; Dietrich, Casper

    1996-01-01

    In this work we present a hardware efficient matrix-vector multiplier architecture for artificial neural networks with digitally stored synapse strengths. We present a novel technique for manipulating bipolar inputs based on an analog two's complements method and an accurate current rectifier...

  18. A neural circuit for angular velocity computation

    Directory of Open Access Journals (Sweden)

    Samuel B Snider

    2010-12-01

    In one of the most remarkable feats of motor control in the animal world, some Diptera, such as the housefly, can accurately execute corrective flight maneuvers in tens of milliseconds. These reflexive movements are achieved by the halteres, gyroscopic force sensors, in conjunction with rapidly-tunable wing-steering muscles. Specifically, the mechanosensory campaniform sensilla located at the base of the halteres transduce and transform rotation-induced gyroscopic forces into information about the angular velocity of the fly's body. But how exactly does the fly's neural architecture generate the angular velocity from the lateral strain forces on the left and right halteres? To explore potential algorithms, we built a neuro-mechanical model of the rotation detection circuit. We propose a neurobiologically plausible method by which the fly could accurately separate and measure the three-dimensional components of an imposed angular velocity. Our model assumes a single sign-inverting synapse and formally resembles some models of directional selectivity by the retina. Using multidimensional error analysis, we demonstrate the robustness of our model under a variety of input conditions. Our analysis reveals the maximum information available to the fly given its physical architecture and the mathematics governing the rotation-induced forces at the haltere's end knob.

  19. Regional cerebral glucose metabolic changes in oculopalatal myoclonus: implication for neural pathways, underlying the disorder

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sang Soo; Moon, So Young; Kim, Ji Soo; Kim, Sang Eun [College of Medicine, Seoul National University, Seoul (Korea, Republic of)

    2004-07-01

    Palatal myoclonus (PM) is characterized by rhythmic involuntary jerky movements of the soft palate of the throat. When associated with eye movements, it is called oculopalatal myoclonus (OPM). Ordinary PM is characterized by hypertrophic olivary degeneration, a trans-synaptic degeneration following loss of neuronal input to the inferior olivary nucleus due to an interruption of the Guillain-Mollaret triangle, usually by a hemorrhage. However, the neural pathways underlying the disorder are uncertain. In an attempt to understand the pathologic neural pathways, we examined the metabolic correlates of this tremulous condition. Brain FDG PET scans were acquired in 8 patients with OPM (age, 49.9 ± 4.6 y; all males; 7 with pontine hemorrhage, 1 with diffuse brainstem infarction) and 50 age-matched healthy males (age, 50.7 ± 9.0), and the regional glucose metabolism was compared using SPM99. For group analysis, the hemispheres containing lesions were assigned to the right side of the brain. Patients with OPM had significant hypometabolism in the ipsilateral (to the lesion) brainstem and superior temporal and parahippocampal gyri (P < 0.05 corrected, k = 100). By contrast, there was significant hypermetabolism in the contralateral middle and inferior temporal gyri, thalamus, middle frontal gyrus and precuneus (P < 0.05 corrected, k = 100). Our data demonstrate the distinct metabolic changes between several ipsilateral and contralateral brain regions (hypometabolism vs. hypermetabolism) in patients with OPM. This may provide clues for understanding the neural pathways underlying the disorder.

  20. Regional cerebral glucose metabolic changes in oculopalatal myoclonus: implication for neural pathways, underlying the disorder

    International Nuclear Information System (INIS)

    Cho, Sang Soo; Moon, So Young; Kim, Ji Soo; Kim, Sang Eun

    2004-01-01

    Palatal myoclonus (PM) is characterized by rhythmic involuntary jerky movements of the soft palate of the throat. When associated with eye movements, it is called oculopalatal myoclonus (OPM). Ordinary PM is characterized by hypertrophic olivary degeneration, a trans-synaptic degeneration following loss of neuronal input to the inferior olivary nucleus due to an interruption of the Guillain-Mollaret triangle, usually by a hemorrhage. However, the neural pathways underlying the disorder are uncertain. In an attempt to understand the pathologic neural pathways, we examined the metabolic correlates of this tremulous condition. Brain FDG PET scans were acquired in 8 patients with OPM (age, 49.9 ± 4.6 y; all males; 7 with pontine hemorrhage, 1 with diffuse brainstem infarction) and 50 age-matched healthy males (age, 50.7 ± 9.0), and the regional glucose metabolism was compared using SPM99. For group analysis, the hemispheres containing lesions were assigned to the right side of the brain. Patients with OPM had significant hypometabolism in the ipsilateral (to the lesion) brainstem and superior temporal and parahippocampal gyri (P < 0.05 corrected, k = 100). By contrast, there was significant hypermetabolism in the contralateral middle and inferior temporal gyri, thalamus, middle frontal gyrus and precuneus (P < 0.05 corrected, k = 100). Our data demonstrate the distinct metabolic changes between several ipsilateral and contralateral brain regions (hypometabolism vs. hypermetabolism) in patients with OPM. This may provide clues for understanding the neural pathways underlying the disorder.

  1. Enhanced biocompatibility of neural probes by integrating microstructures and delivering anti-inflammatory agents via microfluidic channels

    Science.gov (United States)

    Liu, Bin; Kim, Eric; Meggo, Anika; Gandhi, Sachin; Luo, Hao; Kallakuri, Srinivas; Xu, Yong; Zhang, Jinsheng

    2017-04-01

    Objective. Biocompatibility is a major issue for chronic neural implants, involving inflammatory and wound healing responses of neurons and glial cells. To enhance biocompatibility, we developed silicon-parylene hybrid neural probes with open architecture electrodes, microfluidic channels and a reservoir for drug delivery to suppress tissue responses. Approach. We chronically implanted our neural probes in the rat auditory cortex and investigated (1) whether the open architecture electrode reduces inflammatory reaction, by measuring glial responses; and (2) whether delivery of the antibiotic minocycline reduces inflammatory and tissue reaction. Four weeks after implantation, immunostaining for glial fibrillary acidic protein (astrocyte marker) and ionized calcium-binding adaptor molecule 1 (macrophage/microglia cell marker) was conducted to identify immunoreactive astrocytes and microglial cells, and to determine the extent of astrocyte and microglial cell reaction/activation. A comparison was made between traditional solid-surface electrodes and newly-designed electrodes with open architecture, as well as between deliveries of minocycline and artificial cerebrospinal fluid diffused through microfluidic channels. Main results. The new probes with integrated micro-structures induced minimal tissue reaction compared to traditional electrodes at 4 weeks after implantation. Minocycline delivered through integrated microfluidic channels reduced the tissue response, as indicated by decreased microglial reaction around the implanted neural probes. Significance. The new design will help enhance the long-term stability of the implantable devices.

  2. Deep Neural Network-Based Chinese Semantic Role Labeling

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiaoqing; CHEN Jun; SHANG Guoqiang

    2017-01-01

    A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with its application on message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smart phone applications. Experiment results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with the minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.

  3. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solis Sanches, L. O.; Miranda, R. Castaneda; Cervantes Viramontes, J. M. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac (Mexico); Vega-Carrillo, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica. Av. Ramon Lopez Velarde 801. Col. Centro Zacatecas, Zac., Mexico. and Unidad Academica de Estudios Nucleares. C. Cip (Mexico)

    2013-07-03

    In this work a neutron spectrum unfolding code, based on artificial intelligence technology, is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are to use an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks methodology" and to use two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  4. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    International Nuclear Information System (INIS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-01-01

    In this work a neutron spectrum unfolding code, based on artificial intelligence technology, is presented. The code, called ''Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres'' (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are to use an embedded artificial neural network architecture optimized with the ''Robust design of artificial neural networks methodology'' and to use two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  5. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    Science.gov (United States)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work a neutron spectrum unfolding code, based on artificial intelligence technology, is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are to use an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks methodology" and to use two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.
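    A hedged sketch of the unfolding idea behind the code described above: a small multilayer perceptron is trained to map two Bonner-sphere count rates to a coarse multi-bin spectrum. The response functions and training spectra are synthetic placeholders, not the data used by NSDann2BS.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n_bins = 12                                        # coarse energy bins of the spectrum

# Hypothetical response functions of the two Bonner spheres over the energy bins.
response = np.abs(rng.normal(size=(2, n_bins)))

# Synthetic training spectra and the count rates the two spheres would record.
spectra = np.abs(rng.normal(size=(2000, n_bins)))
counts = spectra @ response.T + 0.01 * rng.normal(size=(2000, 2))

net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=3000, random_state=0)
net.fit(counts, spectra)                           # learn the counts -> spectrum mapping

measured = counts[:1]                              # stand-in for a measured pair of count rates
print(np.round(net.predict(measured), 2))
```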

  6. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    Science.gov (United States)

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which allow complex statistical distributions involving a large number of interacting variables to be described efficiently. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.

  7. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
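    The numpy sketch below illustrates the localized-RBF identification step in spirit: Gaussian radial basis functions cover the region visited by a roughly periodic state trajectory, and their output weights are adapted from the prediction error. Dynamics, gains and the reference are illustrative assumptions, not the paper's system.

```python
import numpy as np

centers = np.linspace(-1.5, 1.5, 25)                # RBF centers covering the orbit
width = 0.2
W = np.zeros(centers.size)                          # adaptive output weights

def phi(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def f_true(x):                                      # unknown dynamics to be learned
    return -x + 0.5 * np.sin(2 * x)

dt, gamma = 0.01, 5.0
x = 1.0
for step in range(20000):
    x_ref = np.sin(0.005 * step)                    # roughly periodic state trajectory
    e = W @ phi(x) - f_true(x)                      # approximation error along the orbit
    W -= dt * gamma * e * phi(x)                    # gradient-type adaptive law
    x += dt * f_true(x) + dt * 2.0 * (x_ref - x)    # crude tracking of the reference

print("max error on the visited region:",
      max(abs(W @ phi(v) - f_true(v)) for v in np.linspace(-1, 1, 50)))
```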

  8. Exploiting Hidden Layer Responses of Deep Neural Networks for Language Recognition

    Science.gov (United States)

    2016-09-08

    Target language clusters include Arabic (ara): Egyptian, Iraqi, Levantine, Maghrebi, Modern Standard; Chinese (chi): Cantonese, Mandarin, Min, Wu; English (eng): British, ... [Figure 1: Frame-by-frame DNN language identification.] Figure 1 shows the architecture of the DNN... To compare the direct DNN system with the proposed DNN i-vector system, we trained a single neural network to classify all 20 languages. The architecture of this...

  9. Neural Networks and Their Applications for the Oil Industry Les réseaux neuronaux et leurs applications pour l'industrie pétrolière

    Directory of Open Access Journals (Sweden)

    Fogelman-Soulie F.

    2006-11-01

    Neural networks can be used in many different areas of problems related to petroleum exploration and production. Well-defined classes of applications already exist, together with appropriate neural network architectures. Detailed theoretical results make it possible to monitor and evaluate the results obtained by neural networks. Sophisticated applications will certainly require the use of multi-modular architectures.

  10. On the use of a pruning prior for neural networks

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1996-01-01

    We address the problem of using a regularization prior that prunes unnecessary weights in a neural network architecture. This prior provides a convenient alternative to traditional weight-decay. Two examples are studied to support this method and illustrate its use. First we use the sunspots...

  11. Neural networks to predict exosphere temperature corrections

    Science.gov (United States)

    Choury, Anna; Bruinsma, Sean; Schaeffer, Philippe

    2013-10-01

    Precise orbit prediction requires a forecast of the atmospheric drag force with a high degree of accuracy. Artificial neural networks are universal approximators derived from artificial intelligence and are widely used for prediction. This paper presents a method based on artificial neural networks for predicting thermosphere density by forecasting exospheric temperature, which will be used by the semiempirical thermosphere Drag Temperature Model (DTM) currently under development. Artificial neural networks have been shown to be effective and robust forecasting models for temperature prediction. The proposed model can be used for any mission from which temperature can be deduced accurately, i.e., it does not require specific training. Although the primary goal of the study was to create a model for 1 day ahead forecasts, the proposed architecture has been generalized to 2 and 3 day predictions as well. The impact of artificial neural network predictions has been quantified for the low-orbiting satellite Gravity Field and Steady-State Ocean Circulation Explorer in 2011, and an order of magnitude smaller orbit errors were found when compared with orbits propagated using the thermosphere model DTM2009.
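    A minimal sketch of one-day-ahead forecasting from a short history of past values, using a small scikit-learn network on a synthetic series standing in for exospheric temperature data; it is not the DTM pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
days = np.arange(2000)
# Synthetic stand-in for an exospheric temperature series (27-day-like cycle plus noise).
temps = 1000 + 80 * np.sin(2 * np.pi * days / 27.0) + 10 * rng.normal(size=days.size)

lag = 5                                            # use the last 5 days as predictors
X = np.stack([temps[i:i + lag] for i in range(len(temps) - lag)])
y = temps[lag:]                                    # next-day temperature

split = 1500
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
net.fit(X[:split], y[:split])
pred = net.predict(X[split:])
print("test RMSE (K):", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 1))
```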

  12. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    Science.gov (United States)

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp), and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and a cross validation is also performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple output and single output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Artificial neural network models for prediction of intestinal permeability of oligopeptides

    Directory of Open Access Journals (Sweden)

    Kim Min-Kook

    2007-07-01

    Background: Oral delivery is a highly desirable property for candidate drugs under development. Computational modeling could provide a quick and inexpensive way to assess the intestinal permeability of a molecule. Although there have been several studies aimed at predicting the intestinal absorption of chemical compounds, there have been no attempts to predict intestinal permeability on the basis of peptide sequence information. To develop models for predicting the intestinal permeability of peptides, we adopted an artificial neural network as a machine-learning algorithm. The positive control data consisted of intestinal barrier-permeable peptides obtained by the peroral phage display technique, and the negative control data were prepared from random sequences. Results: The capacity of our models to make appropriate predictions was validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC) curve (the ROC score). The training and test set statistics indicated that our models were of strikingly good quality and could discriminate between permeable and random sequences with a high level of confidence. Conclusion: We developed artificial neural network models to predict the intestinal permeabilities of oligopeptides on the basis of peptide sequence information. Both binary and VHSE (principal component score Vectors of Hydrophobic, Steric and Electronic properties) descriptors produced statistically significant training models; the models with simple neural network architectures showed slightly greater predictive power than those with complex ones. We anticipate that our models will be applicable to the selection of intestinal barrier-permeable peptides for generating peptide drugs or peptidomimetics.
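    The sketch below illustrates the binary-descriptor pipeline in miniature: peptides are one-hot encoded, a small neural network classifier is trained, and performance is scored with the ROC AUC. Sequences and labels are random placeholders, so the score is only a demonstration of the workflow.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(9)

def one_hot(peptide):
    """Binary descriptor: a 20-dimensional one-hot code per residue, concatenated."""
    m = np.zeros((len(peptide), len(AA)))
    for i, aa in enumerate(peptide):
        m[i, AA.index(aa)] = 1.0
    return m.ravel()

# Random 7-mers with synthetic labels standing in for permeable vs. random peptides.
peptides = ["".join(rng.choice(list(AA), size=7)) for _ in range(1000)]
labels = rng.integers(0, 2, size=1000)

X = np.stack([one_hot(p) for p in peptides])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```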

  14. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
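    A hedged sketch of the input-delay idea: a short window of past accelerometer samples is stacked as the network input, each window is classified, and the raw decisions are smoothed by a simple output filter. Data, labels and window lengths are synthetic assumptions, not the Tortoise@ system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(10)
# Synthetic tri-axial accelerometer stream with alternating activity labels.
acc = rng.normal(size=(6000, 3))
labels = np.repeat(rng.integers(0, 2, size=60), 100)        # 0 = other, 1 = digging

delay = 10                                                   # input-delay window length
X = np.stack([acc[i:i + delay].ravel() for i in range(len(acc) - delay)])
y = labels[delay:]

clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=500, random_state=0).fit(X, y)
raw = clf.predict(X)

# Simple output filter: majority vote over a sliding window of raw decisions.
win = 25
smoothed = np.array([int(np.round(raw[max(0, i - win):i + 1].mean())) for i in range(len(raw))])
print("fraction of samples flagged as digging:", smoothed.mean())
```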

  15. Neural network controller for Active Demand-Side Management with PV energy in the residential sector

    International Nuclear Information System (INIS)

    Matallanas, E.; Castillo-Cagigal, M.; Gutiérrez, A.; Monasterio-Huelin, F.; Caamaño-Martín, E.; Masa, D.; Jiménez-Leube, J.

    2012-01-01

    Highlights: ► We have developed a neural controller for Active Demand-Side Management. ► The controller consists of Multilayer Perceptrons evolved with a genetic algorithm. ► The architecture of the controller is distributed and modular. ► The simulations show that the local electrical behavior improves. ► Active Demand-Side Management helps users control their energy behaviour. -- Abstract: In this paper, we describe the development of a control system for Demand-Side Management in the residential sector with Distributed Generation. The electrical system under study incorporates local PV energy generation, an electricity storage system, connection to the grid and a home automation system. The distributed control system is composed of two modules: a scheduler and a coordinator, both implemented with neural networks. The control system enhances the local energy performance, scheduling the tasks demanded by the user and maximizing the use of local generation.

  16. Neural networks and their potential application in nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development and application to a variety of real world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very noisy signals (e.g., electroencephalograms), modeling complex systems that cannot be modeled mathematically, and predicting whether proposed loans will be good or will fail. This paper presents a brief tutorial on neural networks and describes research on the potential applications to nuclear power plants.

  17. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  18. Convolutional Neural Networks for Human Activity Recognition Using Body-Worn Sensors

    Directory of Open Access Journals (Sweden)

    Fernando Moya Rueda

    2018-05-01

    Full Text Available Human activity recognition (HAR) is a classification task for recognizing human movements. Methods of HAR are of great interest as they have become tools for measuring occurrences and durations of human actions, which are the basis of smart assistive technologies and manual processes analysis. Recently, deep neural networks have been deployed for HAR in the context of activities of daily living using multichannel time-series. These time-series are acquired from body-worn devices, which are composed of different types of sensors. The deep architectures process these measurements for finding basic and complex features in human corporal movements, and for classifying them into a set of human actions. As the devices are worn at different parts of the human body, we propose a novel deep neural network for HAR. This network handles sequence measurements from different body-worn devices separately. An evaluation of the architecture is performed on three datasets, the Opportunity, Pamap2, and an industrial dataset, outperforming the state-of-the-art. In addition, different network configurations are also evaluated. We find that applying convolutions per sensor channel and per body-worn device improves the capabilities of convolutional neural networks (CNNs).
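
    A minimal sketch of the "convolution per sensor channel" idea (the filter length, channel count, and pooling choice are illustrative assumptions, not the paper's architecture): each channel of the multichannel time-series gets its own temporal filter, and the pooled responses are merged by a shared classification layer.

        # Minimal sketch: per-channel 1-D convolution, max pooling, shared linear classifier.
        import numpy as np

        rng = np.random.default_rng(3)
        T, CHANNELS, CLASSES = 64, 9, 5                  # window length, IMU channels, activities
        kernels = rng.normal(0, 0.1, (CHANNELS, 5))      # one small temporal filter per channel
        W_out = rng.normal(0, 0.1, (CLASSES, CHANNELS))

        def classify(window):
            """window: (T, CHANNELS) segment of a body-worn sensor recording."""
            feats = []
            for c in range(CHANNELS):                    # convolve each channel with its own filter
                conv = np.convolve(window[:, c], kernels[c], mode="valid")
                feats.append(np.max(conv))               # global max pooling over time
            return int(np.argmax(W_out @ np.array(feats)))

        print(classify(rng.normal(size=(T, CHANNELS))))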

  19. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    Full Text Available We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  20. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Science.gov (United States)

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  1. Architectural Prototyping in Industrial Practice

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2008-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system......, in addressing issues regarding quality attributes, in addressing architectural risks, and in addressing the problem of knowledge transfer and conformance. Little work has been reported so far on the actual industrial use of architectural prototyping. In this paper, we report from an ethnographical study...... and focus group involving architects from four companies in which we have focused on architectural prototypes. Our findings conclude that architectural prototypes play an important role in resolving problems experimentally, but less so in exploring alternative solutions. Furthermore, architectural...

  2. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    Science.gov (United States)

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The obtained optimal architecture of the artificial neural network model involved a feed-forward neural network with three input neurons, one hidden layer with eight neurons and one output layer including a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R(2) of 0.9571, which implied a good agreement between the predicted value and the actual value, and confirmed a good generalization of the network. Based on the combination of neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content of the actual measurement under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which was well matched with the predicted value (597.2 mg/g DW). This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
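
    A minimal sketch of coupling a small feed-forward surrogate model with a genetic algorithm search, as the abstract describes (the weights below are random stand-ins rather than the trained 3-8-1 model, and the variable bounds are assumed for illustration).

        # Minimal sketch: a 3-8-1 surrogate of extraction yield searched by a simple GA over
        # (pressure, liquid/solid ratio, ethanol concentration).
        import numpy as np

        rng = np.random.default_rng(4)
        W1, b1 = rng.normal(0, 0.5, (8, 3)), rng.normal(0, 0.5, 8)
        W2, b2 = rng.normal(0, 0.5, 8), 0.0
        LOW = np.array([100.0, 10.0, 20.0])              # assumed lower bounds (MPa, mL/g, %)
        HIGH = np.array([600.0, 30.0, 90.0])             # assumed upper bounds

        def predicted_yield(x):
            z = (x - LOW) / (HIGH - LOW)                 # scale inputs to [0, 1]
            return float(W2 @ np.tanh(W1 @ z + b1) + b2)

        pop = rng.uniform(LOW, HIGH, (40, 3))
        for _ in range(100):
            scores = np.array([predicted_yield(x) for x in pop])
            parents = pop[np.argsort(scores)[-10:]]
            children = parents[rng.integers(0, 10, 30)] + rng.normal(0, [10.0, 0.5, 1.0], (30, 3))
            pop = np.clip(np.vstack([parents, children]), LOW, HIGH)
        best = pop[np.argmax([predicted_yield(x) for x in pop])]
        print("candidate optimum (pressure, ratio, ethanol):", best)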

  3. A Neural-Network-Fusion Architecture for Automatic Extraction of Oceanographic Features from Satellite Remote Sensing Imagery

    National Research Council Canada - National Science Library

    Askari, Farid

    1999-01-01

    This report describes an approach for automatic feature detection from fusion of remote sensing imagery using a combination of neural network architecture and the Dempster-Shafer (DS) theory of evidence...

  4. Representation of linguistic form and function in recurrent neural networks

    NARCIS (Netherlands)

    Kadar, Akos; Chrupala, Grzegorz; Alishahi, Afra

    2017-01-01

    We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture

  5. Bidirectional Joint Representation Learning with Symmetrical Deep Neural Networks for Multimodal and Crossmodal Applications

    OpenAIRE

    Vukotic , Vedran; Raymond , Christian; Gravier , Guillaume

    2016-01-01

    International audience; Common approaches to problems involving multiple modalities (classification, retrieval, hyperlinking, etc.) are early fusion of the initial modalities and crossmodal translation from one modality to the other. Recently, deep neural networks, especially deep autoencoders, have proven promising both for crossmodal translation and for early fusion via multimodal embedding. In this work, we propose a flexible cross-modal deep neural network architecture for multimodal and ...

  6. The neural sociometer: brain mechanisms underlying state self-esteem.

    Science.gov (United States)

    Eisenberger, Naomi I; Inagaki, Tristen K; Muscatell, Keely A; Byrne Haltom, Kate E; Leary, Mark R

    2011-11-01

    On the basis of the importance of social connection for survival, humans may have evolved a "sociometer"-a mechanism that translates perceptions of rejection or acceptance into state self-esteem. Here, we explored the neural underpinnings of the sociometer by examining whether neural regions responsive to rejection or acceptance were associated with state self-esteem. Participants underwent fMRI while viewing feedback words ("interesting," "boring") ostensibly chosen by another individual (confederate) to describe the participant's previously recorded interview. Participants rated their state self-esteem in response to each feedback word. Results demonstrated that greater activity in rejection-related neural regions (dorsal ACC, anterior insula) and mentalizing regions was associated with lower-state self-esteem. Additionally, participants whose self-esteem decreased from prescan to postscan versus those whose self-esteem did not showed greater medial prefrontal cortical activity, previously associated with self-referential processing, in response to negative feedback. Together, the results inform our understanding of the origin and nature of our feelings about ourselves.

  7. Functional neural networks underlying response inhibition in adolescents and adults.

    Science.gov (United States)

    Stevens, Michael C; Kiehl, Kent A; Pearlson, Godfrey D; Calhoun, Vince D

    2007-07-19

    This study provides the first description of neural network dynamics associated with response inhibition in healthy adolescents and adults. Functional and effective connectivity analyses of whole brain hemodynamic activity elicited during performance of a Go/No-Go task were used to identify functionally integrated neural networks and characterize their causal interactions. Three response inhibition circuits formed a hierarchical, inter-dependent system wherein thalamic modulation of input to premotor cortex by fronto-striatal regions led to response suppression. Adolescents differed from adults in the degree of network engagement, regional fronto-striatal-thalamic connectivity, and network dynamics. We identify and characterize several age-related differences in the function of neural circuits that are associated with behavioral performance changes across adolescent development.

  8. Formal Models of the Network Co-occurrence Underlying Mental Operations.

    Directory of Open Access Journals (Sweden)

    Danilo Bzdok

    2016-06-01

    Full Text Available Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition.
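
    A toy sketch of the decomposition idea (ordinary least squares on synthetic data stands in for the paper's multivariate statistical learning method; the sizes and weights are invented for illustration): a task activity map is expressed as a weighted combination of canonical network topographies, giving a crude "network co-occurrence" profile.

        # Toy sketch: regress a task map onto canonical network maps.
        import numpy as np

        rng = np.random.default_rng(5)
        N_VOXELS, N_NETWORKS = 5000, 10
        networks = rng.normal(size=(N_VOXELS, N_NETWORKS))   # columns: canonical network topographies
        true_w = np.array([0.0, 2.0, 0.0, 1.0, 0, 0, 0, 0, 0.5, 0])
        task_map = networks @ true_w + 0.1 * rng.normal(size=N_VOXELS)

        weights, *_ = np.linalg.lstsq(networks, task_map, rcond=None)
        print(np.round(weights, 2))                      # recovered per-network contributions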

  9. Solving differential equations with unknown constitutive relations as recurrent neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.; Tartakovsky, Alexandre M.

    2017-12-08

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
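
    A minimal sketch of the core construction (plain numpy with untrained weights rather than the authors' TensorFlow implementation; the equation form and sizes are assumptions): a forward-Euler step of dx/dt = -r(x) + u, with the unknown reaction rate r(.) represented by a small neural network and unrolled over time like a recurrent cell.

        # Minimal sketch: a discretized ODE with a neural-network reaction-rate term.
        import numpy as np

        rng = np.random.default_rng(6)
        W1, b1 = rng.normal(0, 0.5, (8, 1)), np.zeros(8)
        W2 = rng.normal(0, 0.5, 8)

        def reaction_rate(x):
            """Neural-network stand-in for the unknown sink/reaction term."""
            return float(W2 @ np.tanh(W1 @ np.array([x]) + b1))

        def simulate(x0, inputs, dt=0.1):
            xs, x = [x0], x0
            for u in inputs:                             # each step is one "recurrent" update
                x = x + dt * (-reaction_rate(x) + u)
                xs.append(x)
            return np.array(xs)

        trajectory = simulate(1.0, inputs=np.full(100, 0.2))
        # Training would adjust W1, b1, W2 so that the simulated states match the partially
        # observed measurements, e.g. by backpropagating through the unrolled steps.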

  10. Design and FPGA-implementation of multilayer neural networks with on-chip learning

    International Nuclear Information System (INIS)

    Haggag, S.S.M.Y

    2008-01-01

    Artificial Neural Networks (ANNs) are used in many industrial applications because of their parallel structure, high speed, and ability to give easy solutions to complicated problems. For example, identifying oranges and apples in a sorting machine with a neural network is easier than using image processing techniques to do the same thing. There are different software tools for designing, training, and testing ANNs, but in order to use an ANN in industry, it should be implemented in hardware outside the computer. Neural networks are artificial systems inspired by the brain's cognitive behavior, which can learn tasks with some degree of complexity, such as signal processing, diagnosis, robotics, image processing, and pattern recognition. Many applications demand high computing power, and traditional software implementations are not sufficient. This thesis presents the design and FPGA implementation of multilayer neural networks with on-chip learning in re-configurable hardware. Hardware implementation of neural network algorithms is attractive due to their high performance, and they can easily be made parallel. The architecture proposed herein takes advantage of distinct data paths for the forward and backward propagation stages and a pipelined adaptation of the on-line backpropagation algorithm to significantly improve the performance of the learning phase. The architecture is easily scalable and able to cope with arbitrary network sizes with the same hardware. The implementation targets diagnosis of research reactor accidents, to avoid the risk of occurrence of a nuclear accident. The proposed circuits are implemented using the Xilinx FPGA chip XC40150xv and occupy 73% of the chip CLBs. The design takes 10.8 μs to reach a decision in forward propagation, compared with the current software implementation of the RPS, which takes 24 ms. The results show that the proposed architecture leads to a significant speed-up compared to high-end software solutions.

  11. Artificial neural network models for biomass gasification in fluidized bed gasifiers

    DEFF Research Database (Denmark)

    Puig Arnavat, Maria; Hernández, J. Alfredo; Bruno, Joan Carles

    2013-01-01

    Artificial neural networks (ANNs) have been applied for modeling biomass gasification process in fluidized bed reactors. Two architectures of ANNs models are presented; one for circulating fluidized bed gasifiers (CFB) and the other for bubbling fluidized bed gasifiers (BFB). Both models determine...

  12. Deep learning for steganalysis via convolutional neural networks

    Science.gov (United States)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis to learn features automatically via deep learning models. We propose a novel, customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.

  13. Comparative Analysis of Maximum Power Point Tracking Controllers under Partial Shaded Conditions in a Photovoltaic System

    Directory of Open Access Journals (Sweden)

    R. Ramaprabha

    2015-06-01

    Full Text Available Mismatching effects due to partial shaded conditions are the major drawbacks existing in today’s photovoltaic (PV) systems. These mismatch effects are greatly reduced in distributed PV system architecture where each panel is effectively decoupled from its neighboring panel. To obtain the optimal operation of the PV panels, maximum power point tracking (MPPT) techniques are used. In partial shaded conditions, detecting the maximum operating point is difficult as the characteristic curves are complex with multiple peaks. In this paper, a neural network control technique is employed for MPPT. Detailed analyses were carried out on MPPT controllers in centralized and distributed architectures under partial shaded environments. The efficiency of the MPPT controllers and the effectiveness of the proposed control technique under partial shaded environments were examined using MATLAB software. The results were validated through experimentation.

  14. Characterization of Radar Signals Using Neural Networks

    Science.gov (United States)

    1990-12-01

    [Abstract garbled in extraction: the indexed text consists of code-listing comment fragments from the report appendix (e.g. "Function Name: load.input.patterns ... This function determines whether ..." and "XSE.last.layer ... determines whether to backpropagate the parameter by the sigmoidal or linear update") together with reference entries, including "... Sigmoidal Function," Mathematics of Control, Signals and Systems, 2:303-314 (March 1989), and Dayhoff, Judith E., Neural Network Architectures. New York: Van ...]

  15. A Hexapod Walker Using a Heterarchical Architecture for Action Selection

    Directory of Open Access Journals (Sweden)

    Malte Schilling

    2013-09-01

    Full Text Available Moving in a cluttered environment with a six-legged walking machine that has additional body actuators, therefore controlling 22 DoFs, is not a trivial task. Already simple forward walking on a flat plane requires the system to select between different internal states. The orchestration of these states depends on walking velocity and on external disturbances. Such disturbances occur continuously, for example due to irregular up-and-down movements of the body or slipping of the legs, even on flat surfaces, in particular when negotiating tight curves. The number of possible states is further increased when the system is allowed to walk backward or when front legs are used as grippers and cannot contribute to walking. Further states are necessary for extensions that allow for navigation. Here we demonstrate a solution for the selection and sequencing of different (attractor) states required to control different behaviors, such as forward walking at different speeds, backward walking, and negotiation of tight curves. This selection is made by a recurrent neural network of motivation units, controlling a bank of decentralized memory elements in combination with the feedback through the environment. The underlying heterarchical architecture of the network allows various combinations of these elements to be selected. This modular approach, an example of neural reuse of a limited number of procedures, allows for adaptation to different internal and external conditions. We sketch how this approach may be expanded to form a cognitive system able to plan ahead. This architecture is characterized by different types of modules being arranged in layers and columns, but the complete network can also be considered as a holistic system showing emergent properties which cannot be attributed to a specific module.

  16. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  17. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.

    Science.gov (United States)

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enable applicability of the RPU approach to a large class of neural network architectures.
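
    A minimal sketch of the analog non-idealities that motivate the noise and bound management discussed above (the noise level, bound, and learning rate are illustrative numbers, not the RPU device model): an outer-product weight update with additive noise, saturated at the device bounds.

        # Minimal sketch: a noisy, bounded "analog" weight update.
        import numpy as np

        rng = np.random.default_rng(7)
        W = np.zeros((16, 8))
        BOUND, NOISE_STD, LR = 1.0, 0.01, 0.1

        def rpu_style_update(W, x, delta):
            """W += lr * outer(delta, x), plus additive noise, clipped to +/- BOUND."""
            ideal = LR * np.outer(delta, x)
            noisy = ideal + NOISE_STD * rng.normal(size=W.shape)
            return np.clip(W + noisy, -BOUND, BOUND)

        x, delta = rng.normal(size=8), rng.normal(size=16)
        W = rpu_style_update(W, x, delta)
        # In the spirit of the abstract, bound management keeps signals inside the
        # representable range, while noise management rescales inputs so each analog
        # cycle retains an acceptable signal-to-noise ratio.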

  18. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    Science.gov (United States)

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enable applicability of the RPU approach to a large class of neural network architectures. PMID:29066942

  19. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    Science.gov (United States)

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
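
    A minimal sketch of the partitioned-gradient idea (Python multiprocessing stands in for the paper's parallel virtual machine, and a logistic model stands in for the colonoscopy network; names and sizes are invented for illustration): each worker computes the gradient on its slice of the training set, and the partial gradients are summed before one synchronized update.

        # Minimal sketch: data-parallel gradient evaluation across worker processes.
        import numpy as np
        from multiprocessing import Pool

        def partial_gradient(args):
            w, X, y = args
            p = 1.0 / (1.0 + np.exp(-(X @ w)))           # predictions on this partition
            return X.T @ (p - y)                         # gradient contribution of the partition

        if __name__ == "__main__":
            rng = np.random.default_rng(8)
            X, y = rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)
            w = np.zeros(20)
            chunks = [(w, X[i::4], y[i::4]) for i in range(4)]   # 4 partitions of the training set
            with Pool(4) as pool:
                grads = pool.map(partial_gradient, chunks)
            w -= 0.01 * sum(grads)                       # one synchronized gradient step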

  20. Control of root system architecture by DEEPER ROOTING 1 increases rice yield under drought conditions.

    Science.gov (United States)

    Uga, Yusaku; Sugimoto, Kazuhiko; Ogawa, Satoshi; Rane, Jagadish; Ishitani, Manabu; Hara, Naho; Kitomi, Yuka; Inukai, Yoshiaki; Ono, Kazuko; Kanno, Noriko; Inoue, Haruhiko; Takehisa, Hinako; Motoyama, Ritsuko; Nagamura, Yoshiaki; Wu, Jianzhong; Matsumoto, Takashi; Takai, Toshiyuki; Okuno, Kazutoshi; Yano, Masahiro

    2013-09-01

    The genetic improvement of drought resistance is essential for stable and adequate crop production in drought-prone areas. Here we demonstrate that alteration of root system architecture improves drought avoidance through the cloning and characterization of DEEPER ROOTING 1 (DRO1), a rice quantitative trait locus controlling root growth angle. DRO1 is negatively regulated by auxin and is involved in cell elongation in the root tip that causes asymmetric root growth and downward bending of the root in response to gravity. Higher expression of DRO1 increases the root growth angle, whereby roots grow in a more downward direction. Introducing DRO1 into a shallow-rooting rice cultivar by backcrossing enabled the resulting line to avoid drought by increasing deep rooting, which maintained high yield performance under drought conditions relative to the recipient cultivar. Our experiments suggest that control of root system architecture will contribute to drought avoidance in crops.

  1. An efficient architecture for LVQ-SLM for PAPR reduction

    International Nuclear Information System (INIS)

    Khalid, S.; Yasin, M.

    2010-01-01

    In this paper we propose an efficient architecture for the implementation of an LVQ (Learning Vector Quantization) NN (Neural Network), used as a classifier, for PAPR (Peak to Average Power Ratio) reduction. A special feature of the implementation is a combinatorial module for nearest neighbor search that allows online execution of this important operation during classification. The LVQ classifier is programmed in Verilog and the entire circuit is synthesized on FPGAs (Field Programmable Gate Arrays) using Xilinx ISE (Integrated Software Environment) 8.1i. The model is implemented with 64 subcarriers, considering the parametric values of the WLAN standard IEEE 802.11a. Using the architecture, efficient on-line classification is achieved. (author)
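
    A minimal software sketch of the classification step that the proposed hardware accelerates (the codebook size, dimensionality, and labels are illustrative assumptions, not the Verilog design): an input is assigned the label of its nearest codebook vector, and the LVQ1 rule nudges that vector toward or away from the input.

        # Minimal LVQ sketch: nearest-neighbour classification plus the LVQ1 update rule.
        import numpy as np

        rng = np.random.default_rng(9)
        codebook = rng.normal(size=(8, 4))               # 8 prototype vectors of dimension 4
        labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])      # hypothetical class labels

        def classify(x):
            return labels[np.argmin(np.sum((codebook - x) ** 2, axis=1))]

        def lvq1_update(x, y, lr=0.05):
            k = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            sign = 1.0 if labels[k] == y else -1.0       # attract if correct, repel otherwise
            codebook[k] += sign * lr * (x - codebook[k])

        x = rng.normal(size=4)
        lvq1_update(x, y=1)
        print(classify(x))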

  2. Recognition of decays of charged tracks with neural network techniques

    International Nuclear Information System (INIS)

    Stimpfl-Abele, G.

    1991-01-01

    We developed neural-network learning techniques for the recognition of decays of charged tracks using a feed-forward network with error back-propagation. Two completely different methods are described in detail and their efficiencies for several NN architectures are compared with conventional methods. Excellent results are obtained. (orig.)

  3. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    Science.gov (United States)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  4. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  5. Abnormal neural activation patterns underlying working memory impairment in chronic phencyclidine-treated mice.

    Directory of Open Access Journals (Sweden)

    Yosefu Arime

    Full Text Available Working memory impairment is a hallmark feature of schizophrenia and is thought to be caused by dysfunctions in the prefrontal cortex (PFC) and associated brain regions. However, the neural circuit anomalies underlying this impairment are poorly understood. The aim of this study is to assess working memory performance in the chronic phencyclidine (PCP) mouse model of schizophrenia, and to identify the neural substrates of working memory. To address this issue, we conducted the following experiments for mice after withdrawal from chronic administration (14 days) of either saline or PCP (10 mg/kg): (1) a discrete paired-trial variable-delay task in a T-maze to assess working memory, and (2) brain-wide c-Fos mapping to identify activated brain regions relevant to this task performance either 90 min or 0 min after the completion of the task, with each time point examined under working memory effort and basal conditions. Correct responses in the test phase of the task were significantly reduced across delays (5, 15, and 30 s) in chronic PCP-treated mice compared with chronic saline-treated controls, suggesting delay-independent impairments in working memory in the PCP group. In layer 2-3 of the prelimbic cortex, the number of working memory effort-elicited c-Fos+ cells was significantly higher in the chronic PCP group than in the chronic saline group. The main effect of working memory effort relative to basal conditions was to induce significantly increased c-Fos+ cells in the other layers of the prelimbic cortex and the anterior cingulate and infralimbic cortex regardless of the different chronic regimens. Conversely, this working memory effort had a negative effect (fewer c-Fos+ cells) in the ventral hippocampus. These results shed light on some putative neural networks relevant to working memory impairments in mice chronically treated with PCP, and emphasize the importance of layer 2-3 of the prelimbic cortex of the PFC.

  6. Dynamic neural architecture for social knowledge retrieval.

    Science.gov (United States)

    Wang, Yin; Collins, Jessica A; Koski, Jessica; Nugiel, Tehila; Metoki, Athanasia; Olson, Ingrid R

    2017-04-18

    Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a 2-d training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.

  7. Connecting Neurons to a Mobile Robot: An In Vitro Bidirectional Neural Interface

    Science.gov (United States)

    Novellino, A.; D'Angelo, P.; Cozzi, L.; Chiappalone, M.; Sanguineti, V.; Martinoia, S.

    2007-01-01

    One of the key properties of intelligent behaviors is the capability to learn and adapt to changing environmental conditions. These features are the result of the continuous and intense interaction of the brain with the external world, mediated by the body. For this reason “embodiment” represents an innovative and very suitable experimental paradigm when studying the neural processes underlying learning new behaviors and adapting to unpredicted situations. To this purpose, we developed a novel bidirectional neural interface. We interconnected in vitro neurons, extracted from rat embryos and plated on a microelectrode array (MEA), to external devices, thus allowing real-time closed-loop interaction. The novelty of this experimental approach entails the necessity to explore different computational schemes and experimental hypotheses. In this paper, we present an open, scalable architecture, which allows fast prototyping of different modules and where coding and decoding schemes and different experimental configurations can be tested. This hybrid system can be used for studying the computational properties and information coding in biological neuronal networks with far-reaching implications for the future development of advanced neuroprostheses. PMID:18350128

  8. Artificial Neural Networks to Detect Risk of Type 2 Diabetes | Baha ...

    African Journals Online (AJOL)

    A multilayer feedforward architecture with a backpropagation algorithm was designed using the Neural Network Toolbox of Matlab. The network was trained using batch-mode backpropagation with gradient descent and momentum. The best performing network identified during training had 2 hidden layers of 6 and 3 neurons, ...

  9. Ultrasonographic Findings of Mammographic Architectural Distortion

    International Nuclear Information System (INIS)

    Ma, Jeong Hyun; Kang, Bong Joo; Cha, Eun Suk; Hwangbo, Seol; Kim, Hyeon Sook; Park, Chang Suk; Kim, Sung Hun; Choi, Jae Jeong; Chung, Yong An

    2008-01-01

    To review the sonographic findings of various diseases showing architectural distortion depicted under mammography. We collected and reviewed architectural distortions observed under mammography at our health institution between 1 March 2004 and 28 February 2007. We collected 23 cases of sonographically-detected mammographic architectural distortions with lesions confirmed after surgical resection. The sonographic findings of mammographic architectural distortion were analyzed by use of the BI-RADS lexicon for shape, margin, lesion boundary, echo pattern, posterior acoustic feature and orientation. A variety of diseases showed architectural distortion depicted under mammography. Fibrocystic disease was the most common presentation (n = 6), followed by adenosis (n = 2), stromal fibrosis (n = 2), radial scar (n = 3), usual ductal hyperplasia (n = 1), atypical ductal hyperplasia (n = 1) and mild fibrosis with microcalcification (n = 1). Malignant lesions such as ductal carcinoma in situ (DCIS) (n = 2), lobular carcinoma in situ (LCIS) (n = 2), invasive ductal carcinoma (n = 2) and invasive lobular carcinoma (n = 1) were observed. As observed by sonography, shape was divided as irregular (n = 22) and round (n = 1). Margin was divided as circumscribed (n = 1), indistinct (n = 7), angular (n = 1), microlobulated (n = 1) and spiculated (n = 13). Lesion boundary was divided as abrupt interface (n = 11) and echogenic halo (n = 12). Echo pattern was divided as hypoechoic (n = 20), anechoic (n = 1), hyperechoic (n = 1) and isoechoic (n = 1). Posterior acoustic feature was divided as posterior acoustic feature (n = 7), posterior acoustic shadow (n = 15) and complex posterior acoustic feature (n = 1). Orientation was divided as parallel (n = 12) and not parallel (n = 11). There were no differential sonographic findings between benign and malignant lesions. This study presented various sonographic findings of mammographic architectural distortion and that it is

  10. TRIGA control rod position and reactivity transient Monitoring by Neural Networks

    International Nuclear Information System (INIS)

    Rosa, R.; Palomba, M.; Sepielli, M.

    2008-01-01

    Plant sensor drift or malfunction and operator actions in nuclear reactor control can be supported by sensor on-line monitoring and data validation through a soft-computing process. On-line recalibration can often avoid manual calibration or drifting component replacement. DSP requires prompt response to the modified conditions. Artificial Neural Network (ANN) and Fuzzy logic ensure: prompt response, link with field measurement and physical system behaviour, data incoming interpretation, and detection of discrepancy for mis-calibration or sensor faults. ANN (Artificial Neural Network) is a system based on the operation of biological neural networks. Although computing is advancing day by day, there are certain tasks that a program made for a common microprocessor is unable to perform. A software implementation of an ANN can be made with Pros and Cons. Pros: A neural network can perform tasks that a linear program cannot; When an element of the neural network fails, it can continue without any problem due to its parallel nature; A neural network learns and does not need to be reprogrammed; It can be implemented in any application; It can be implemented without any problem. Cons: The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated; it requires high processing time for large neural networks; and the neural network needs training to operate. Three possibilities of training exist: Supervised learning: the network is trained providing input and matching output patterns; Unsupervised learning: input patterns are not a priori classified and the system must develop its own representation of the input stimuli; Reinforcement learning: an intermediate form of the above two types of learning, in which the learning machine does some action on the environment and gets a feedback response from the environment. Two TRIGA ANN applications are considered: control rod position and fuel temperature. The outcome obtained in this

  11. Computer architecture technology trends

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. This year's edition of Computer Architecture Technology Trends analyses the trends which are taking place in the architecture of computing systems today. Due to the sheer number of different applications to which computers are being applied, there seems no end to the different adoptions which proliferate. There are, however, some underlying trends which appear. Decision makers should be aware of these trends when specifying architectures, particularly for future applications. This report is fully revised and updated and provides insight in

  12. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning of autonomous robots, is proposed to improve the learning and retrieving performance of a dynamical recurrent associative memory architecture. The model allows steady and continuous establishment of associative memory for spatio-temporal regularities and time series in discrete sequences of inputs. The inserted hidden units can be taken as long-term memories that expand the capacity of the network and sometimes may fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endow autonomous robots with the ability to adapt to new data without destroying the learned patterns. The system also benefits from its potential chaos character for emergence.

  13. Review of the Neural Oscillations Underlying Meditation

    Directory of Open Access Journals (Sweden)

    Darrin J. Lee

    2018-03-01

    Full Text Available Objective: Meditation is one type of mental training that has been shown to produce many cognitive benefits. Meditation practice is associated with improvement in concentration and reduction of stress, depression, and anxiety symptoms. Furthermore, different forms of meditation training are now being used as interventions for a variety of psychological and somatic illnesses. These benefits are thought to occur as a result of neurophysiologic changes. The most commonly studied specific meditation practices are focused attention (FA), open-monitoring (OM), as well as transcendental meditation (TM), and loving-kindness (LK) meditation. In this review, we compare the neural oscillatory patterns during these forms of meditation. Method: We performed a systematic review of neural oscillations during FA, OM, TM, and LK meditation practices, comparing meditators to meditation-naïve adults. Results: FA, OM, TM, and LK meditation are associated with global increases in oscillatory activity in meditators compared to meditation-naïve adults, with larger changes occurring as the length of meditation training increases. While FA and OM are related to increases in anterior theta activity, only FA is associated with changes in posterior theta oscillations. Alpha activity increases in posterior brain regions during both FA and OM. In anterior regions, FA shows a bilateral increase in alpha power, while OM shows a decrease only in left-sided power. Gamma activity in these meditation practices is similar in frontal regions, but increases are variable in parietal and occipital regions. Conclusions: The current literature suggests distinct differences in neural oscillatory activity among FA, OM, TM, and LK meditation practices. Further characterizing these oscillatory changes may better elucidate the cognitive and therapeutic effects of specific meditation practices, and potentially lead to the development of novel neuromodulation targets to take advantage of their

  14. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego D, E.; Lorente F, A.; Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E.

    2011-01-01

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, Regularization, Parametrization, Least-squares, and Maximum Entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence technology have been used. Approaches based on Genetic Algorithms and Artificial Neural Networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, they still have some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using ANNs and Genetically Evolved Artificial Neural Networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  15. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.

  16. Neural Alterations in Acquired Age-Related Hearing Loss

    Directory of Open Access Journals (Sweden)

    Raksha Anand Mudar

    2016-06-01

    Full Text Available Hearing loss is one of the most prevalent chronic health conditions in older adults. Growing evidence suggests that hearing loss is associated with reduced cognitive functioning and incident dementia. In this mini-review, we briefly examine literature on anatomical and functional alterations in the brains of adults with acquired age-associated hearing loss, which may underlie the cognitive consequences observed in this population, focusing on studies that have used structural and functional magnetic resonance imaging, diffusion tensor imaging, and event-related electroencephalography. We discuss structural and functional alterations observed in the temporal and frontal cortices and the limbic system. These neural alterations are discussed in the context of common cause, information-degradation, and sensory-deprivation hypotheses, and we suggest possible rehabilitation strategies. Although we are beginning to learn more about changes in neural architecture and functionality related to age-associated hearing loss, much work remains to be done. Understanding the neural alterations will provide objective markers for early identification of neural consequences of age-associated hearing loss and for evaluating benefits of intervention approaches.

  17. Implications of behavioral architecture for the evolution of self-organized division of labor.

    Directory of Open Access Journals (Sweden)

    A Duarte

    Full Text Available Division of labor has been studied separately from a proximate self-organization and an ultimate evolutionary perspective. We aim to bring together these two perspectives. So far this has been done by choosing a behavioral mechanism a priori and considering the evolution of the properties of this mechanism. Here we use artificial neural networks to allow for a more open architecture. We study whether emergent division of labor can evolve in two different network architectures; a simple feedforward network, and a more complex network that includes the possibility of self-feedback from previous experiences. We focus on two aspects of division of labor; worker specialization and the ratio of work performed for each task. Colony fitness is maximized by both reducing idleness and achieving a predefined optimal work ratio. Our results indicate that architectural constraints play an important role for the outcome of evolution. With the simplest network, only genetically determined specialization is possible. This imposes several limitations on worker specialization. Moreover, in order to minimize idleness, networks evolve a biased work ratio, even when an unbiased work ratio would be optimal. By adding self-feedback to the network we increase the network's flexibility and worker specialization evolves under a wider parameter range. Optimal work ratios are more easily achieved with the self-feedback network, but still provide a challenge when combined with worker specialization.

  18. Implications of behavioral architecture for the evolution of self-organized division of labor.

    Science.gov (United States)

    Duarte, A; Scholtens, E; Weissing, F J

    2012-01-01

    Division of labor has been studied separately from a proximate self-organization and an ultimate evolutionary perspective. We aim to bring together these two perspectives. So far this has been done by choosing a behavioral mechanism a priori and considering the evolution of the properties of this mechanism. Here we use artificial neural networks to allow for a more open architecture. We study whether emergent division of labor can evolve in two different network architectures; a simple feedforward network, and a more complex network that includes the possibility of self-feedback from previous experiences. We focus on two aspects of division of labor; worker specialization and the ratio of work performed for each task. Colony fitness is maximized by both reducing idleness and achieving a predefined optimal work ratio. Our results indicate that architectural constraints play an important role for the outcome of evolution. With the simplest network, only genetically determined specialization is possible. This imposes several limitations on worker specialization. Moreover, in order to minimize idleness, networks evolve a biased work ratio, even when an unbiased work ratio would be optimal. By adding self-feedback to the network we increase the network's flexibility and worker specialization evolves under a wider parameter range. Optimal work ratios are more easily achieved with the self-feedback network, but still provide a challenge when combined with worker specialization.

  19. Histological Architecture Underlying Brain-Immune Cell-Cell Interactions and the Cerebral Response to Systemic Inflammation.

    Science.gov (United States)

    Shimada, Atsuyoshi; Hasegawa-Ishii, Sanae

    2017-01-01

    Although the brain is now known to actively interact with the immune system under non-inflammatory conditions, the site of cell-cell interactions between brain parenchymal cells and immune cells has been an open question until recently. Studies by our and other groups have indicated that brain structures such as the leptomeninges, choroid plexus stroma and epithelium, attachments of choroid plexus, vascular endothelial cells, cells of the perivascular space, circumventricular organs, and astrocytic endfeet construct the histological architecture that provides a location for intercellular interactions between bone marrow-derived myeloid lineage cells and brain parenchymal cells under non-inflammatory conditions. This architecture also functions as the interface between the brain and the immune system, through which systemic inflammation-induced molecular events can be relayed to the brain parenchyma at early stages of systemic inflammation during which the blood-brain barrier is relatively preserved. Although brain microglia are well known to be activated by systemic inflammation, the mechanism by which systemic inflammatory challenge and microglial activation are connected has not been well documented. Perturbed brain-immune interaction underlies a wide variety of neurological and psychiatric disorders including ischemic brain injury, status epilepticus, repeated social defeat, and neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease. Proinflammatory status associated with cytokine imbalance is involved in autism spectrum disorders, schizophrenia, and depression. In this article, we propose a mechanism connecting systemic inflammation, brain-immune interface cells, and brain parenchymal cells and discuss the relevance of basic studies of the mechanism to neurological disorders with a special emphasis on sepsis-associated encephalopathy and preterm brain injury.

  20. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    Science.gov (United States)

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency comparing with the state-of-the-art approaches.
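
    The two ideas highlighted in this record, removing most connections and representing the surviving weights with low-precision integers, can be illustrated with a short sketch. The keep ratio, bit width, and layer size below are illustrative assumptions, not values taken from the DANoC paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer: 256 inputs -> 64 units.
W = rng.normal(0.0, 0.1, size=(64, 256))

# 1) Sparsify: keep only the largest-magnitude 1% of weights (assumed ratio).
keep_fraction = 0.01
threshold = np.quantile(np.abs(W), 1.0 - keep_fraction)
mask = np.abs(W) >= threshold
W_sparse = W * mask

# 2) Quantize the survivors to signed 4-bit integers (assumed width).
scale = np.abs(W_sparse).max() / 7.0          # signed 4-bit range is [-8, 7]
W_q = np.clip(np.round(W_sparse / scale), -8, 7).astype(np.int8)

def forward(x):
    """Forward pass using the sparse, quantized weights (ReLU activation)."""
    return np.maximum((W_q.astype(float) @ x) * scale, 0.0)

x = rng.normal(size=256)
print("kept weights:", int(mask.sum()), "of", W.size)
print("output shape:", forward(x).shape)
```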

  1. Modelling and Forecasting Cruise Tourism Demand to İzmir by Different Artificial Neural Network Architectures

    Directory of Open Access Journals (Sweden)

    Murat Cuhadar

    2014-03-01

    Full Text Available Abstract Cruise ports have emerged as an important sector for the economy of Turkey, which is bordered on three sides by water. Forecasting cruise tourism demand enables better planning and efficient preparation at the destination, and it is the basis for elaborating future plans. In recent years, new techniques such as artificial neural networks have been employed to develop predictive models for estimating tourism demand. This study aims to determine the forecasting method that provides the best performance by comparing the forecast accuracy of Multi-layer Perceptron (MLP), Radial Basis Function (RBF) and Generalized Regression (GRNN) neural networks in estimating the monthly inbound cruise tourism demand to İzmir. The total number of foreign cruise tourist arrivals was used as the measure of inbound cruise tourism demand, and monthly cruise tourist arrivals to İzmir Cruise Port in the period January 2005 - December 2013 were used to fit the models. Experimental results showed that the radial basis function (RBF) neural network outperforms the multi-layer perceptron (MLP) and the generalized regression neural network (GRNN) in terms of forecasting accuracy. By means of the obtained RBF neural network model, the monthly inbound cruise tourism demand to İzmir was forecast for the year 2014.
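
    As a rough illustration of the RBF approach favoured in this study, the sketch below fits a Gaussian RBF network to a synthetic monthly series using lagged values as inputs and a linear least-squares output layer. The series, kernel width, number of centres, and lag length are assumptions for demonstration only, not the study's data or settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly series with seasonality (stand-in for cruise arrivals).
t = np.arange(120)
y = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, size=t.size)

# Build lagged input windows: predict month t from the previous 12 months.
lags = 12
X = np.array([y[i:i + lags] for i in range(len(y) - lags)])
target = y[lags:]

# RBF hidden layer: Gaussian activations around centres taken from the data.
centres = X[::10]          # every 10th window used as a centre (assumption)
width = 20.0               # kernel width (assumption)

def rbf(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Output weights by linear least squares on the RBF activations.
Phi = rbf(X)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

pred = rbf(X[-12:]) @ w    # reproduce the last year of windows
print("mean absolute error:", round(float(np.abs(pred - target[-12:]).mean()), 2))
```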

  2. Information content of neural networks with self-control and variable activity

    International Nuclear Information System (INIS)

    Bolle, D.; Amari, S.I.; Dominguez Carreta, D.R.C.; Massolo, G.

    2001-01-01

    A self-control mechanism for the dynamics of neural networks with variable activity is discussed using a recursive scheme for the time evolution of the local field. It is based upon the introduction of a self-adapting time-dependent threshold as a function of both the neural and pattern activity in the network. This mechanism leads to an improvement of the information content of the network as well as an increase of the storage capacity and the basins of attraction. Different architectures are considered and the results are compared with numerical simulations
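
    A minimal sketch of the general idea, a threshold that adapts to the network's instantaneous activity so that sparse patterns remain retrievable, is given below for a binary network storing low-activity patterns. The particular threshold rule used here is an illustrative stand-in, not the exact self-control scheme analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

N, P, a = 500, 10, 0.1                     # neurons, patterns, pattern activity
patterns = (rng.random((P, N)) < a).astype(float)

# Hebbian-style couplings for sparse (low-activity) patterns.
J = (patterns - a).T @ (patterns - a) / (a * (1 - a) * N)
np.fill_diagonal(J, 0.0)

state = patterns[0].copy()
state[rng.random(N) < 0.05] = 0.0          # corrupt the cue slightly

for _ in range(20):
    field = J @ state
    # Self-adapting threshold tied to the instantaneous network activity
    # (illustrative stand-in for the paper's self-control rule).
    theta = 0.5 * state.mean() / a
    state = (field > theta).astype(float)

overlap = (patterns[0] * state).sum() / patterns[0].sum()
print("overlap with stored pattern:", round(float(overlap), 3))
```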

  3. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    Directory of Open Access Journals (Sweden)

    Svitlana Volkova

    Full Text Available This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTM) units capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus

  4. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    Science.gov (United States)

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTM) units capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus, signals from
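
    The modelling setup described in both records above can be illustrated with a small LSTM regressor that maps a window of weekly feature vectors (e.g., ILI history together with social-media signal counts) to the next ILI value. The feature dimension, window length, and hyperparameters below are placeholders rather than the study's actual configuration.

```python
import torch
import torch.nn as nn

class ILIForecaster(nn.Module):
    """Toy LSTM regressor: a window of weekly feature vectors -> next ILI value."""
    def __init__(self, n_features=16, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, weeks, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # regress from the last hidden state

# Synthetic stand-in data: 64 location-weeks, 8-week windows, 16 features.
x = torch.randn(64, 8, 16)
y = torch.randn(64, 1)

model = ILIForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```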

  5. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.

  6. Tests of track segment and vertex finding with neural networks

    International Nuclear Information System (INIS)

    Denby, B.; Lessner, E.; Lindsey, C.S.

    1990-04-01

    Feed forward neural networks have been trained, using back-propagation, to find the slopes of simulated track segments in a straw chamber and to find the vertex of tracks from both simulated and real events in a more conventional drift chamber geometry. Network architectures, training, and performance are presented. 12 refs., 7 figs

  7. Optical implementation of a feature-based neural network with application to automatic target recognition

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by feeding back the output of the feature correlator iteratively to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  8. Measuring Customer Behavior with Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2016-03-01

    Full Text Available In this paper, we propose a neural network model for human emotion and gesture classification. We demonstrate that the proposed architecture represents an effective tool for real-time processing of customer's behavior for distributed on-land systems, such as information kiosks, automated cashiers and ATMs. The proposed approach combines most recent biometric techniques with the neural network approach for real-time emotion and behavioral analysis. In the series of experiments, emotions of human subjects were recorded, recognized, and analyzed to give statistical feedback of the overall emotions of a number of targets within a certain time frame. The result of the study allows automatic tracking of user’s behavior based on a limited set of observations.

  9. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

    A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.
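
    The underlying principle, separating the fixed nonlinear activations from the linear inter-layer propagation so that weights can be obtained by direct linear algebra rather than iterative gradient descent, can be hinted at with the sketch below, which solves the output weights of a randomly initialized hidden layer in a single least-squares step (computed via SVD internally by numpy). This illustrates the separability idea only; it is not the paper's alternating-direction algorithm or virtual-input-layer construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression task: learn y = sin(x) on [0, 2*pi].
x = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(x)

# Fixed nonlinear hidden layer (random weights, tanh activation).
W_hidden = rng.normal(size=(1, 50))
b_hidden = rng.normal(size=50)
H = np.tanh(x @ W_hidden + b_hidden)          # (200, 50) hidden activations

# Output weights obtained in one step: least squares (SVD-based) on H.
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ W_out
print("max absolute error:", float(np.abs(pred - y).max()))
```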

  10. Fully Connected Cascade Artificial Neural Network Architecture for Attention Deficit Hyperactivity Disorder Classification From Functional Magnetic Resonance Imaging Data.

    Science.gov (United States)

    Deshpande, Gopikrishna; Wang, Peng; Rangaprakash, D; Wilamowski, Bogdan

    2015-12-01

    Automated recognition and classification of brain diseases are of tremendous value to society. Attention deficit hyperactivity disorder (ADHD) is a diverse spectrum disorder whose diagnosis is based on behavior and hence will benefit from classification utilizing objective neuroimaging measures. Toward this end, an international competition was conducted for classifying ADHD using functional magnetic resonance imaging data acquired from multiple sites worldwide. Here, we consider the data from this competition as an example to illustrate the utility of fully connected cascade (FCC) artificial neural network (ANN) architecture for performing classification. We employed various directional and nondirectional brain connectivity-based methods to extract discriminative features which gave better classification accuracy compared to raw data. Our accuracy for distinguishing ADHD from healthy subjects was close to 90% and between the ADHD subtypes was close to 95%. Further, we show that, if properly used, FCC ANN performs very well compared to other classifiers such as support vector machines in terms of accuracy, irrespective of the feature used. Finally, the most discriminative connectivity features provided insights about the pathophysiology of ADHD and showed reduced and altered connectivity involving the left orbitofrontal cortex and various cerebellar regions in ADHD.
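
    In a fully connected cascade topology each successive neuron receives the original inputs plus the outputs of all earlier neurons, and the last neuron provides the output. The sketch below shows only that forward pass with arbitrary weights; the connectivity-feature extraction and training procedure of the study are not reproduced.

```python
import numpy as np

def fcc_forward(x, weights):
    """Fully connected cascade forward pass.

    Each neuron sees the raw inputs plus the outputs of all previous
    neurons; the last neuron's output is the network output.
    """
    signal = list(x)
    out = 0.0
    for w in weights:                       # one weight vector per neuron
        z = np.dot(w[:-1], signal) + w[-1]  # last entry is the bias
        out = np.tanh(z)
        signal.append(out)
    return out

rng = np.random.default_rng(4)
n_inputs, n_neurons = 5, 3
weights = [rng.normal(size=n_inputs + k + 1) for k in range(n_neurons)]
print(fcc_forward(rng.normal(size=n_inputs), weights))
```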

  11. Artificial neural network study on organ-targeting peptides

    Science.gov (United States)

    Jung, Eunkyoung; Kim, Junhyoung; Choi, Seung-Hoon; Kim, Minkyoung; Rhee, Hokyoung; Shin, Jae-Min; Choi, Kihang; Kang, Sang-Kee; Lee, Nam Kyung; Choi, Yun-Jaie; Jung, Dong Hyun

    2010-01-01

    We report a new approach to studying organ targeting of peptides on the basis of peptide sequence information. The positive control data sets consist of organ-targeting peptide sequences identified by the peroral phage-display technique for four organs, and the negative control data are prepared from random sequences. The capacity of our models to make appropriate predictions is validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC) curve (the ROC score). VHSE descriptor produces statistically significant training models and the models with simple neural network architectures show slightly greater predictive power than those with complex ones. The training and test set statistics indicate that our models could discriminate between organ-targeting and random sequences. We anticipate that our models will be applicable to the selection of organ-targeting peptides for generating peptide drugs or peptidomimetics.

  12. Neural components of altruistic punishment

    Directory of Open Access Journals (Sweden)

    Emily eDu

    2015-02-01

    Full Text Available Altruistic punishment, which occurs when an individual incurs a cost to punish in response to unfairness or a norm violation, may play a role in perpetuating cooperation. The neural correlates underlying costly punishment have only recently begun to be explored. Here we review the current state of research on the neural basis of altruism from the perspectives of costly punishment, emphasizing the importance of characterizing elementary neural processes underlying a decision to punish. In particular, we emphasize three cognitive processes that contribute to the decision to altruistically punish in most scenarios: inequity aversion, cost-benefit calculation, and social reference frame to distinguish self from others. Overall, we argue for the importance of understanding the neural correlates of altruistic punishment with respect to the core computations necessary to achieve a decision to punish.

  13. Implementation of neural networks on 'Connection Machine'

    International Nuclear Information System (INIS)

    Belmonte, Ghislain

    1990-12-01

    This report is a first approach to the notion of neural networks and their possible applications within the framework of the artificial intelligence activities of the Department of Applied Mathematics of the Limeil-Valenton Research Center. The first part is an introduction to the field of neural networks; the main neural network models are described in this section. The applications of neural networks to classification have mainly been studied, because they could particularly help to solve some of the decision-support problems dealt with by the C.E.A. As neural networks perform a large number of parallel operations, it was logical to use a parallel-architecture computer: the Connection Machine (which uses 16384 processors and is located at E.T.C.A. Arcueil). The second part presents some generalities on parallelism and the Connection Machine, and two implementations of neural networks on the Connection Machine. The first of these implementations concerns one of the most widely used algorithms for neural network training: the gradient back-propagation algorithm. The second, less common, concerns a network of neurons intended mainly for pattern recognition: the Fukushima Neocognitron. The latter is studied by the C.E.A. of Bruyeres-le-Chatel in order to realize an embedded system (including hardened circuits) for fast pattern recognition [fr]

  14. An automatic microseismic or acoustic emission arrival identification scheme with deep recurrent neural networks

    Science.gov (United States)

    Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi

    2018-02-01

    Conventional arrival pick-up algorithms cannot avoid manual modification of their parameters when multiple events must be identified simultaneously under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, this study proposes an algorithm that picks up the arrivals of microseismic or acoustic emission events using deep recurrent neural networks. Arrival identification is performed in two important steps: a training phase and a testing phase. The training process is modelled by deep recurrent neural networks with a Long Short-Term Memory architecture. During the testing phase, the learned weights are used to identify arrivals in the microseismic/acoustic emission data sets. The data sets were obtained from rock physics acoustic emission experiments. In order to obtain data sets with different SNRs, random noise was added to the raw experimental data sets. The results showed that the proposed method attains a hit-rate above 80 per cent at SNR 0 dB and approximately 70 per cent at SNR -5 dB, with an absolute error within 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.

  15. Filtering and spectral processing of 1-D signals using cellular neural networks

    NARCIS (Netherlands)

    Moreira-Tamayo, O.; Pineda de Gyvez, J.

    1996-01-01

    This paper presents cellular neural networks (CNN) for one-dimensional discrete signal processing. Although CNN has been extensively used in image processing applications, little has been done for 1-dimensional signal processing. We propose a novel CNN architecture to carry out these tasks. This

  16. Brain states recognition during visual perception by means of artificial neural network in the different EEG frequency ranges

    Science.gov (United States)

    Musatov, V. Yu.; Runnova, A. E.; Andreev, A. V.; Zhuravlev, M. O.

    2018-04-01

    In the present paper, the possibility of classification by artificial neural networks of a certain architecture of ambiguous images is investigated using the example of the Necker cube from the experimentally obtained EEG recording data of several operators. The possibilities of artificial neural network classification of ambiguous images are investigated in the different frequency ranges of EEG recording signals.

  17. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    Directory of Open Access Journals (Sweden)

    Tayfun Gokmen

    2017-10-01

    Full Text Available In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enable applicability of the RPU approach to a large class of neural network architectures.
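
    The noise and bound limitations mentioned above can be mimicked in software by perturbing and clipping every weight update, which is roughly what a training-time device model does. The sketch below applies such a model to plain gradient-descent training of a linear layer; the noise level and weight bounds are arbitrary assumptions, not the paper's device characteristics.

```python
import numpy as np

rng = np.random.default_rng(5)

NOISE_STD = 0.02          # per-update multiplicative noise (assumed)
W_MIN, W_MAX = -1.0, 1.0  # weight (conductance) bounds (assumed)

def device_update(W, grad, lr=0.1):
    """Gradient step with per-element update noise and hard clipping."""
    noisy_step = lr * grad * (1.0 + rng.normal(0.0, NOISE_STD, size=W.shape))
    return np.clip(W - noisy_step, W_MIN, W_MAX)

# Toy linear regression trained with the noisy, bounded updates.
X = rng.normal(size=(256, 8))
true_w = rng.uniform(-0.8, 0.8, size=(8, 1))   # kept inside the bounds
y = X @ true_w

W = np.zeros((8, 1))
for _ in range(200):
    grad = X.T @ (X @ W - y) / len(X)
    W = device_update(W, grad)

print("max weight error after training:", float(np.abs(W - true_w).max()))
```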

  18. Hearing loss impacts neural alpha oscillations under adverse listening conditions

    OpenAIRE

    Petersen, Eline B.; Wöstmann, Malte; Obleser, Jonas; Stenfelt, Stefan; Lunner, Thomas

    2015-01-01

    Degradations in external, acoustic stimulation have long been suspected to increase the load on working memory (WM). One neural signature of WM load is enhanced power of alpha oscillations (6–12 Hz). However, it is unknown to what extent common internal, auditory degradation, that is, hearing impairment, affects the neural mechanisms of WM when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors memory load and backgrou...

  19. Neural network modelling of planform geometry of headland-bay beaches

    Science.gov (United States)

    Iglesias, G.; López, I.; Castro, A.; Carballo, R.

    2009-02-01

    The shoreline of beaches in the lee of coastal salients or man-made structures, usually known as headland-bay beaches, has a distinctive curvature; wave fronts curve as a result of wave diffraction at the headland and in turn cause the shoreline to bend. The ensuing curved planform is of great interest both as a peculiar landform and in the context of engineering projects in which it is necessary to predict how a coastal structure will affect the sandy shoreline in its lee. A number of empirical models have been put forward, each based on a specific equation. A novel approach, based on the application of artificial neural networks, is presented in this work. Unlike the conventional method, no particular equation of the planform is embedded in the model. Instead, it is the model itself that learns about the problem from a series of examples of headland-bay beaches (the training set) and thereafter applies this self-acquired knowledge to other cases (the test set) for validation. Twenty-three headland-bay beaches from around the world were selected, of which sixteen and seven make up the training and test sets, respectively. As there is no well-developed theory for deciding upon the most convenient neural network architecture to deal with a particular data set, an experimental study was conducted in which ten different architectures with one and two hidden neuron layers and five training algorithms - 50 different options combining network architecture and training algorithm - were compared. Each of these options was implemented, trained and tested in order to find the best-performing approach for modelling the planform of headland-bay beaches. Finally, the selected neural network model was compared with a state-of-the-art planform model and was shown to outperform it.
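
    The kind of architecture comparison described here, trying several hidden-layer configurations and keeping the one that generalizes best, can be imitated in a few lines with scikit-learn. The data below are synthetic stand-ins and the candidate architectures are arbitrary; the study's 50 architecture/algorithm combinations and beach data are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Synthetic stand-in for (wave/headland parameters) -> (planform coordinate).
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = [(5,), (10,), (20,), (5, 5), (10, 10)]   # assumed architectures
for hidden in candidates:
    model = MLPRegressor(hidden_layer_sizes=hidden, solver="lbfgs",
                         max_iter=2000, random_state=0).fit(X_tr, y_tr)
    print(hidden, "test R^2:", round(model.score(X_te, y_te), 3))
```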

  20. Autoshaping and Automaintenance: A Neural-Network Approach

    OpenAIRE

    Burgos, José E

    2007-01-01

    This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an A-B-A design and four instances of feedforward architectures. In A, networks received a positive contingency between inputs that simulated a conditioned stimulus (C...

  1. Development and Evaluation of Micro-Electrocorticography Arrays for Neural Interfacing Applications

    Science.gov (United States)

    Schendel, Amelia Ann

    Neural interfaces have great promise for both electrophysiological research and therapeutic applications. Whether for the study of neural circuitry or for neural prosthetic or other therapeutic applications, micro-electrocorticography (micro-ECoG) arrays have proven extremely useful as neural interfacing devices. These devices strike a balance between invasiveness and signal resolution, an important step towards eventual human application. The objective of this research was to make design improvements to micro-ECoG devices to enhance both biocompatibility and device functionality. To best evaluate the effectiveness of these improvements, a cranial window imaging method for in vivo monitoring of the longitudinal tissue response post device implant was developed. Employment of this method provided valuable insight into the way tissue grows around micro-ECoG arrays after epidural implantation, spurring a study of the effects of substrate geometry on the meningeal tissue response. The results of the substrate footprint comparison suggest that a more open substrate geometry provides an easy path for the tissue to grow around to the top side of the device, whereas a solid device substrate encourages the tissue to thicken beneath the device, between the electrode sites and the brain. The formation of thick scar tissue between the recording electrode sites and the neural tissue is disadvantageous for long-term recorded signal quality, and thus future micro-ECoG device designs should incorporate open-architecture substrates for enhanced longitudinal in vivo function. In addition to investigating improvements for long-term device reliability, it was also desired to enhance the functionality of micro-ECoG devices for neural electrophysiology research applications. To achieve this goal, a completely transparent graphene-based device was fabricated for use with the cranial window imaging method and optogenetic techniques. The use of graphene as the conductive material provided

  2. Recurrent Artificial Neural Networks and Finite State Natural Language Processing.

    Science.gov (United States)

    Moisl, Hermann

    It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for natural language processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

  3. Bridging the Gap: Towards a Cell-Type Specific Understanding of Neural Circuits Underlying Fear Behaviors

    Science.gov (United States)

    McCullough, KM; Morrison, FG; Ressler, KJ

    2016-01-01

    Fear and anxiety-related disorders are remarkably common and debilitating, and are often characterized by dysregulated fear responses. Rodent models of fear learning and memory have taken great strides towards elucidating the specific neuronal circuitries underlying the learning of fear responses. The present review addresses recent research utilizing optogenetic approaches to parse circuitries underlying fear behaviors. It also highlights the powerful advances made when optogenetic techniques are utilized in a genetically defined, cell-type specific, manner. The application of next-generation genetic and sequencing approaches in a cell-type specific context will be essential for a mechanistic understanding of the neural circuitry underlying fear behavior and for the rational design of targeted, circuit specific, pharmacologic interventions for the treatment and prevention of fear-related disorders. PMID:27470092

  4. Gelatin methacrylamide hydrogel with graphene nanoplatelets for neural cell-laden 3D bioprinting.

    Science.gov (United States)

    Wei Zhu; Harris, Brent T; Zhang, Lijie Grace

    2016-08-01

    The nervous system is extremely complex, which is one reason why nerves rarely regrow once injury or disease occurs. Advanced 3D bioprinting strategies, which can simultaneously deposit biocompatible materials, cells and supporting components in a layer-by-layer manner, may be a promising solution to address neural damage. Here we present a printable nano-bioink composed of gelatin methacrylamide (GelMA), neural stem cells, and bioactive graphene nanoplatelets for nerve tissue regeneration, used with a stereolithography-based 3D bioprinting technique. We found that the resultant GelMA hydrogel has a higher compressive modulus as the GelMA concentration increases. The porous GelMA hydrogel can provide a biocompatible microenvironment for the survival and growth of neural stem cells. The cells encapsulated in the hydrogel presented good cell viability at low GelMA concentration. The printed neural construct exhibited well-defined architecture and homogeneous cell distribution. In addition, neural stem cells showed neuronal differentiation and neurite elongation within the printed construct after two weeks of culture. These findings indicate that the 3D bioprinted neural construct has great potential for neural tissue regeneration.

  5. GPU implementation of Bayesian neural network construction for data-intensive applications

    International Nuclear Information System (INIS)

    Perry, Michelle; Meyer-Baese, Anke; Prosper, Harrison B

    2014-01-01

    We describe a graphical processing unit (GPU) implementation of the Hybrid Markov Chain Monte Carlo (HMC) method for training Bayesian Neural Networks (BNN). Our implementation uses NVIDIA's parallel computing architecture, CUDA. We briefly review BNNs and the HMC method and we describe our implementations and give preliminary results.

  6. Convolutional neural networks for segmentation and object detection of human semen

    DEFF Research Database (Denmark)

    Nissen, Malte Stær; Krause, Oswin; Almstrup, Kristian

    2017-01-01

    We compare a set of convolutional neural network (CNN) architectures for the task of segmenting and detecting human sperm cells in an image taken from a semen sample. In contrast to previous work, samples are not stained or washed to allow for full sperm quality analysis, making analysis harder due...

  7. A neural network underlying intentional emotional facial expression in neurodegenerative disease

    Directory of Open Access Journals (Sweden)

    Kelly A. Gola

    2017-01-01

    Full Text Available Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty-three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing, and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole-brain VBM revealed that a rightward cortical atrophy pattern, homologous to the left-lateralized speech production network, was associated with intentional emotional imitation deficits. Results point to a possible neural mechanism underlying complex socioemotional communication deficits in neurodegenerative disease patients.

  8. Neural pattern similarity underlies the mnemonic advantages for living words.

    Science.gov (United States)

    Xiao, Xiaoqian; Dong, Qi; Chen, Chuansheng; Xue, Gui

    2016-06-01

    It has been consistently shown that words representing living things are better remembered than words representing nonliving things, yet the underlying cognitive and neural mechanisms have not been clearly elucidated. The present study used both univariate and multivariate pattern analyses to examine the hypotheses that living words are better remembered because (1) they draw more attention and/or (2) they share more overlapping semantic features. Subjects were asked to study a list of living and nonliving words during a semantic judgment task. An unexpected recognition test was administered 30 min later. We found that subjects recognized significantly more living words than nonliving words. Results supported the overlapping semantic feature hypothesis by showing that (a) semantic ratings showed greater semantic similarity for living words than for nonliving words, (b) there was also significantly greater neural global pattern similarity (nGPS) for living words than for nonliving words in the posterior portion of left parahippocampus (LpPHG), (c) the nGPS in the LpPHG reflected the rated semantic similarity, and also mediated the memory differences between two semantic categories, and (d) greater univariate activation was found for living words than for nonliving words in the left hippocampus (LHIP), which mediated the better memory performance for living words and might reflect greater semantic context binding. In contrast, although living words were processed faster and elicited a stronger activity in the dorsal attention network, these differences did not mediate the animacy effect in memory. Taken together, our results provide strong support to the overlapping semantic features hypothesis, and emphasize the important role of semantic organization in episodic memory encoding. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. From green architecture to architectural green

    DEFF Research Database (Denmark)

    Earon, Ofri

    2011-01-01

    The paper investigates the topic of green architecture from an architectural point of view and not an energy point of view. The purpose of the paper is to establish a debate about the architectural language and spatial characteristics of green architecture. In this light, green becomes an adjective that describes the architectural exclusivity of this particular architecture genre. The adjective green expresses architectural qualities differentiating green architecture from non-green architecture. Currently, adding trees and vegetation to the building's facade is the main architectural characteristic ... they have overshadowed the architectural potential of green architecture. The paper questions how a green space should perform, look and function. Two examples are chosen to demonstrate thorough integrations between green and space. The examples are public buildings categorized as pavilions. One ...

  10. Empirical modeling of nuclear power plants using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.; Chong, K.T.

    1991-01-01

    A summary of a procedure for nonlinear identification of process dynamics encountered in nuclear power plant components is presented in this paper using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the nonlinear structure for system identification. In the overall identification process, the feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of time-dependent system nonlinearities. The standard backpropagation learning algorithm is modified and is used to train the proposed hybrid network in a supervised manner. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The nonlinear response of a representative steam generator is predicted using a neural network and is compared to the response obtained from a sophisticated physical model during both high- and low-power operation. The transient responses compare well, though further research is warranted for training and testing of recurrent neural networks during more severe operational transients and accident scenarios

  11. Optimizing Semantic Pointer Representations for Symbol-Like Processing in Spiking Neural Networks.

    Science.gov (United States)

    Gosmann, Jan; Eliasmith, Chris

    2016-01-01

    The Semantic Pointer Architecture (SPA) is a proposal of specifying the computations and architectural elements needed to account for cognitive functions. By means of the Neural Engineering Framework (NEF) this proposal can be realized in a spiking neural network. However, in any such network each SPA transformation will accumulate noise. By increasing the accuracy of common SPA operations, the overall network performance can be increased considerably. As well, the representations in such networks present a trade-off between being able to represent all possible values and being only able to represent the most likely values, but with high accuracy. We derive a heuristic to find the near-optimal point in this trade-off. This allows us to improve the accuracy of common SPA operations by up to 25 times. Ultimately, it allows for a reduction of neuron number and a more efficient use of both traditional and neuromorphic hardware, which we demonstrate here.
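
    One common SPA operation whose accuracy matters is circular-convolution binding of semantic pointers. The sketch below shows binding and approximate unbinding with ordinary vectors and FFTs; it is a rate-based illustration with an arbitrary dimensionality and does not attempt the spiking NEF implementation the paper optimizes.

```python
import numpy as np

rng = np.random.default_rng(7)
D = 256                                    # semantic pointer dimensionality (assumed)

def unit_vector(d):
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: the binding operation used with semantic pointers."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inverse(a):
    """Approximate inverse for unbinding: reverse all but the first element."""
    return np.concatenate(([a[0]], a[:0:-1]))

role, filler = unit_vector(D), unit_vector(D)
bound = bind(role, filler)
recovered = bind(bound, inverse(role))     # approximately equal to `filler`
print("similarity to filler:", round(float(recovered @ filler), 3))
```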

  12. Architecture on Architecture

    DEFF Research Database (Denmark)

    Olesen, Karen

    2016-01-01

    This paper will discuss the challenges faced by architectural education today. It takes as its starting point the double commitment of any school of architecture: on the one hand the task of preserving the particular knowledge that belongs to the discipline of architecture, and on the other hand ... knowledge that is not scientific or academic but is more like a latent body of data that we find embedded in existing works of architecture. This information, it is argued, is not limited by the historical context of the work. It can be thought of as a virtual capacity – a reservoir of spatial configurations that can ... correlation between the study of existing architectures and the training of competences to design for present-day realities.

  13. Deep learning with convolutional neural networks: a resource for the control of robotic prosthetic hands via electromyography

    Directory of Open Access Journals (Sweden)

    Manfredo Atzori

    2016-09-01

    Full Text Available Motivation: Natural control methods based on surface electromyography and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are at best capable of offering natural control for only a few movements. Objective: In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its capabilities for the natural control of robotic hands via surface electromyography by providing a baseline on a large number of intact and amputated subjects. Methods: We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 hand-amputated subjects. The simple architecture of the neural network allowed us to run several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. Results: The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. Significance: The results show that convolutional neural networks with a very simple architecture can produce accuracy comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of surface electromyography data. Finally, the results suggest that deeper and more complex networks may increase dexterous control robustness, thus helping to bridge the gap between the market and scientific research

  14. An Empirical Investigation of Architectural Prototyping

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2010-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system...... and in addressing issues regarding quality attributes, architectural risks, and the problem of knowledge transfer and conformance. However, the actual industrial use of architectural prototyping has not been thoroughly researched so far. In this article, we report from three studies of architectural prototyping...... in practice. First, we report findings from an ethnographic study of practicing software architects. Secondly, we report from a focus group on architectural prototyping involving architects from four companies. And, thirdly, we report from a survey study of 20 practicing software architects and software...

  15. Manipulations of Totalitarian Nazi Architecture

    Science.gov (United States)

    Antoszczyszyn, Marek

    2017-10-01

    The paper considers the controversies surrounding German architecture designed during the Nazi period of 1933-45. This architecture is commonly criticized for lacking innovation, taste and an elementary sense of beauty. Moreover, it has consequently been wiped out of architectural manuals, probably for its undoubted associations with a totalitarian system considered the most maleficent in the whole of history. Meanwhile, the architecture of another totalitarian system, which appeared to be no less sinister than the Nazi one, is not stigmatized with such verve. That is Socrealism architecture, developed especially in Eastern Europe and reportedly containing many similarities with Nazi architecture. Socrealism totalitarian architecture was never condemned like the Nazi one, probably due to politically manipulated propaganda that influenced post-war public opinion. This observation leads to the reflection that maybe, in the same propagandistic way, some values of Nazi architecture are still consciously dissembled in order to hide the fact that some rules used by Nazi German architects have also been consciously used after the war. These are especially the manipulations that Nazi architecture allegedly consisted of. The paper provides some definitions of totalitarian manipulations as well as the ideological assumptions behind their implementation. Finally, a register of confirmed manipulations is provided, with a photographic case study.

  16. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

    For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we have used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem
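
    A Mexican hat wavelet (proportional to the second derivative of a Gaussian) and its use as a transfer/basis function for function approximation can be sketched as follows. The target function, number of units, centres, and width are arbitrary illustrative choices, not the tasks studied in the paper.

```python
import numpy as np

def mexican_hat(x):
    """Mexican hat wavelet: proportional to the second derivative of a Gaussian."""
    return (1.0 - x ** 2) * np.exp(-0.5 * x ** 2)

# Approximate a toy target with a linear combination of translated/dilated wavelets.
x = np.linspace(-5, 5, 400)
target = np.sin(2 * x) * np.exp(-0.1 * x ** 2)

centres = np.linspace(-5, 5, 25)            # translation parameters (assumed)
width = 0.5                                 # dilation parameter (assumed)
Phi = mexican_hat((x[:, None] - centres[None, :]) / width)

coeffs, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ coeffs
print("max approximation error:", float(np.abs(approx - target).max()))
```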

  17. The signs of life in architecture.

    Science.gov (United States)

    Gruber, Petra

    2008-06-01

    Engineers, designers and architects often look to nature for inspiration. The research on 'natural constructions' is aiming at innovation and the improvement of architectural quality. The introduction of life sciences terminology in the context of architecture delivers new perspectives towards innovation in architecture and design. The investigation is focused on the analogies between nature and architecture. Apart from other principles that are found in living nature, an interpretation of the so-called 'signs of life', which characterize living systems, in architecture is presented. Selected architectural projects that have applied specific characteristics of life, whether on purpose or not, will show the state of development in this field and open up future challenges. The survey will include famous built architecture as well as students' design programs, which were carried out under supervision of the author at the Department of Design and Building Construction at the Vienna University of Technology.

  18. Sejarah, Penerapan, dan Analisis Resiko dari Neural Network: Sebuah Tinjauan Pustaka

    Directory of Open Access Journals (Sweden)

    Cristina Cristina

    2018-05-01

    Full Text Available A neural network is a form of artificial intelligence that has the ability to learn, grow, and adapt in a dynamic environment. Neural networks date back to 1890, when the great American psychologist William James published the book "Principles of Psychology". James was the first to publish a number of facts related to the structure and function of the brain. The history of neural network development is divided into four epochs: the Camelot era, the Depression, the Renaissance, and the Neoconnectionism era. Neural networks used today are not 100 percent accurate; however, they are still used because they perform better than alternative computing models. Uses of neural networks include pattern recognition, signal analysis, robotics, and expert systems. Risk analysis of a neural network is first performed using hazard and operability studies (HAZOPS). Determining the neural network requirements well helps in determining its contribution to system hazards and in validating the control or mitigation of any hazards. After the first stage of HAZOPS and the second stage of determining the requirements, the next stage is design. Neural networks undergo repeated design-train-test development. At the design stage, the hazard analysis should consider the design aspects of the development, including neural network architecture, size, intended use, and so on. This continues through the implementation stage, test phase, installation and inspection phase, and operation phase, and ends at the maintenance stage.

  19. Implementation of a feed-forward artificial neural network in VHDL on FPGA

    NARCIS (Netherlands)

    Dondon, P.; Carvalho, J.; Gardere, R.; Lahalle, P.; Tsenov, G.; Mladenov, V.M.; Reljin, B.; Stankovic, S.

    2014-01-01

    Describing an Artificial Neural Network (ANN) using VHDL allows a further implementation of such a system on FPGA. Indeed, the principal point of using FPGAs for ANNs is flexibility, which gives them an advantage over other systems such as ASICs, which are entirely dedicated to one unique architecture and

  20. Child Maltreatment and Neural Systems Underlying Emotion Regulation.

    Science.gov (United States)

    McLaughlin, Katie A; Peverill, Matthew; Gold, Andrea L; Alves, Sonia; Sheridan, Margaret A

    2015-09-01

    The strong associations between child maltreatment and psychopathology have generated interest in identifying neurodevelopmental processes that are disrupted following maltreatment. Previous research has focused largely on neural response to negative facial emotion. We determined whether child maltreatment was associated with neural responses during passive viewing of negative and positive emotional stimuli and effortful attempts to regulate emotional responses. A total of 42 adolescents aged 13 to 19 years, half with exposure to physical and/or sexual abuse, participated. Blood oxygen level-dependent (BOLD) response was measured during passive viewing of negative and positive emotional stimuli and attempts to modulate emotional responses using cognitive reappraisal. Maltreated adolescents exhibited heightened response in multiple nodes of the salience network, including amygdala, putamen, and anterior insula, to negative relative to neutral stimuli. During attempts to decrease responses to negative stimuli relative to passive viewing, maltreatment was associated with greater recruitment of superior frontal gyrus, dorsal anterior cingulate cortex, and frontal pole; adolescents with and without maltreatment down-regulated amygdala response to a similar degree. No associations were observed between maltreatment and neural response to positive emotional stimuli during passive viewing or effortful regulation. Child maltreatment heightens the salience of negative emotional stimuli. Although maltreated adolescents modulate amygdala responses to negative cues to a degree similar to that of non-maltreated youths, they use regions involved in effortful control to a greater degree to do so, potentially because greater effort is required to modulate heightened amygdala responses. These findings are promising, given the centrality of cognitive restructuring in trauma-focused treatments for children. Copyright © 2015 American Academy of Child and Adolescent Psychiatry

  1. Drive reinforcement neural networks for reactor control. Final report

    International Nuclear Information System (INIS)

    Williams, J.G.; Jouse, W.C.

    1995-01-01

    In view of the loss of the third-year funding, the scope of the project goals has been revised. The revision in project scope no longer allows for the detailed modeling of the EBR-II start-up task that was originally envisaged. The authors are continuing, however, to model the control of the rapid power ascent of the University of Arizona TRIGA reactor using a model-based controller and using a drive reinforcement neural network. These will be combined during the concluding period of the project into a hierarchical control architecture. In addition, the modeling of a PWR feedwater heater has continued, and an autonomous fault-tolerant software architecture for its control has been proposed

  2. Standard cell-based implementation of a digital optoelectronic neural-network hardware.

    Science.gov (United States)

    Maier, K D; Beckstein, C; Blickhan, R; Erhard, W

    2001-03-10

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of one layer of the multilayer perceptron circuit, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than that of purely electronic neural networks, has been successfully designed.

  3. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images

    International Nuclear Information System (INIS)

    Sahiner, B.; Chan, H.P.; Petrick, N.; Helvie, M.A.; Adler, D.D.; Goodsitt, M.M.; Wei, D.

    1996-01-01

    The authors investigated the classification of regions of interest (ROI's) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a back-propagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROI's using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROI's containing biopsy-proven masses and 504 ROI's containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false-positive fraction of 31%. The results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms.
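
    As a rough illustration of the kind of network described above, a back-propagation network with two-dimensional weight kernels operating on small ROI images, the following sketch defines a tiny convolutional classifier in PyTorch. The 16×16 input size, layer widths and pooling choices are illustrative assumptions, not the architecture or data of the original study.

```python
# Minimal 2-D convolutional classifier for mass vs. normal-tissue ROI patches.
# Input size (16x16) and layer widths are illustrative assumptions only.
import torch
import torch.nn as nn

class ROIClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2),   # 2-D weight kernels
            nn.ReLU(),
            nn.AvgPool2d(2),                              # 16x16 -> 8x8
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AvgPool2d(2),                              # 8x8 -> 4x4
        )
        self.classifier = nn.Linear(16 * 4 * 4, 1)        # mass vs. normal score

    def forward(self, x):
        x = self.features(x)
        return torch.sigmoid(self.classifier(x.flatten(1)))

model = ROIClassifier()
patch = torch.randn(4, 1, 16, 16)            # batch of 4 grey-level ROI images
print(model(patch).shape)                    # torch.Size([4, 1])
```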

  4. Application of artificial neural networks to evaluate weld defects of nuclear components

    International Nuclear Information System (INIS)

    Amin, E.S.

    2007-01-01

    Artificial neural networks (ANNs) are computational representations based on the biological neural architecture of the brain. ANNs have been successfully applied to a wide range of engineering and scientific applications, such as signal and image processing and data analysis. Although radiographic testing is widely used to detect welding defects, it fails to identify some of them because of the nature of image formation and quality. Neoteric algorithms have been used for weld defect identification in radiographic images to replace expert knowledge. Here, artificial neural networks are applied to noise detection in radiographic films. Radial basis (RB) and learning vector quantization (LVQ) networks were applied. The method shows good performance in weld defect recognition and classification problems.

  5. The application of artificial neural networks to TLD dose algorithm

    International Nuclear Information System (INIS)

    Moscovitch, M.

    1997-01-01

    We review the application of feed-forward neural networks to multi-element thermoluminescence dosimetry (TLD) dose algorithm development. A neural network is an information processing method inspired by the biological nervous system. A dose algorithm based on a neural network is a fundamentally different approach from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with a given response of a multi-element dosimeter (input) many times. The algorithm, trained in this way, eventually is able to produce its own unique solution to similar (but not exactly the same) dose calculation problems. For personnel dosimetry, the output consists of the desired dose components: deep dose, shallow dose, and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. For this application, a neural network architecture was developed based on the concept of the functional link network (FLN). The FLN concept allowed an increase in the dimensionality of the input space and the construction of a neural network without any hidden layers. This simplifies the problem and results in a relatively simple and reliable dose calculation algorithm. Overall, the neural network dose algorithm approach has been shown to significantly improve the precision and accuracy of dose calculations. (authors)
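
    The functional-link idea, expanding the input vector with nonlinear terms so that a single linear output layer with no hidden layers can map dosimeter readings to dose components, can be sketched in a few lines of NumPy. The expansion terms, element count and synthetic data below are illustrative assumptions, not the algorithm actually deployed for personnel dosimetry.

```python
# Toy functional-link network: the raw input (e.g. four TL element readings)
# is expanded with nonlinear terms, and one linear layer maps the expanded
# vector to the dose components -- no hidden layer.  Dimensions are illustrative.
import numpy as np

def expand(x):
    # x: (n_samples, 4) raw TL responses -> augmented feature vector
    pairwise = np.stack([x[:, i] * x[:, j]
                         for i in range(4) for j in range(i + 1, 4)], axis=1)
    return np.hstack([np.ones((len(x), 1)), x, x**2, pairwise])

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 10.0, size=(200, 4))        # simulated element responses
Y = X @ rng.uniform(0.2, 1.0, size=(4, 3))       # surrogate deep/shallow/eye doses

Phi = expand(X)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)      # one-shot linear training
print(np.max(np.abs(Phi @ W - Y)))               # near zero on this toy data
```

    Because the only trainable parameters sit in the final linear map, training reduces to a single least-squares solve, which is what keeps this style of algorithm simple and reliable.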

  6. New backpropagation algorithm with type-2 fuzzy weights for neural networks

    CERN Document Server

    Gaxiola, Fernando; Valdez, Fevrier

    2016-01-01

    In this book a neural network learning method with type-2 fuzzy weight adjustment is proposed. The mathematical analysis of the proposed learning method architecture and the adaptation of type-2 fuzzy weights are presented. The proposed method is based on research of recent methods that handle weight adaptation and especially fuzzy weights. The internal operation of the neuron is changed to work with two internal calculations for the activation function to obtain two results as outputs of the proposed method. Simulation results and a comparative study among monolithic neural networks, neural networks with type-1 fuzzy weights and neural networks with type-2 fuzzy weights are presented to illustrate the advantages of the proposed method. The proposed approach is based on recent methods that handle adaptation of weights using fuzzy logic of type-1 and type-2. The proposed approach is applied to cases of prediction for the Mackey-Glass (for τ=17) and Dow-Jones time series, and recognition of persons with iris bi...

  7. Connecting Neurons to a Mobile Robot: An In Vitro Bidirectional Neural Interface

    Directory of Open Access Journals (Sweden)

    A. Novellino

    2007-01-01

    Full Text Available One of the key properties of intelligent behaviors is the capability to learn and adapt to changing environmental conditions. These features are the result of the continuous and intense interaction of the brain with the external world, mediated by the body. For this reason, “embodiment” represents an innovative and very suitable experimental paradigm when studying the neural processes underlying learning new behaviors and adapting to unpredicted situations. To this purpose, we developed a novel bidirectional neural interface. We interconnected in vitro neurons, extracted from rat embryos and plated on a microelectrode array (MEA), to external devices, thus allowing real-time closed-loop interaction. The novelty of this experimental approach entails the necessity to explore different computational schemes and experimental hypotheses. In this paper, we present an open, scalable architecture, which allows fast prototyping of different modules and where coding and decoding schemes and different experimental configurations can be tested. This hybrid system can be used for studying the computational properties and information coding in biological neuronal networks with far-reaching implications for the future development of advanced neuroprostheses.

  8. Backpropagation architecture optimization and an application in nuclear power plant diagnostics

    International Nuclear Information System (INIS)

    Basu, A.; Bartlett, E.B.

    1993-01-01

    This paper presents a Dynamic Node Architecture (DNA) scheme to optimize the architecture of backpropagation Artificial Neural Networks (ANNs). This network scheme is used to develop an ANN-based diagnostic adviser capable of identifying the operating status of a nuclear power plant. Specifically, a root network is trained to diagnose if the plant is in a normal operating condition or not. In the event of an abnormal condition, another classifier network is trained to recognize the particular transient taking place. These networks are trained using plant instrumentation data gathered during simulations of the various transients and normal operating conditions at the Iowa Electric Light and Power Company's Duane Arnold Energy Center (DAEC) operator training simulator.

  9. Backpropagation architecture optimization and an application in nuclear power plant diagnostics

    International Nuclear Information System (INIS)

    Basu, A.; Bartlett, E.B.

    1993-01-01

    This paper presents a Dynamic Node Architecture (DNA) scheme to optimize the architecture of backpropagation Artificial Neural Networks (ANNs). This network scheme is used to develop an ANN-based diagnostic adviser capable of identifying the operating status of a nuclear power plant. Specifically, a 'root' network is trained to diagnose if the plant is in a normal operating condition or not. In the event of an abnormal condition, another 'classifier' network is trained to recognize the particular transient taking place. These networks are trained using plant instrumentation data gathered during simulations of the various transients and normal operating conditions at the Iowa Electric Light and Power Company's Duane Arnold Energy Center (DAEC) operator training simulator.

  10. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    Science.gov (United States)

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
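
    For readers unfamiliar with the computation being accelerated, the following NumPy fragment shows one contrastive-divergence (CD-1) weight update for a small binary RBM; hardware engines of the kind described essentially parallelize these node-wise products and sigmoid evaluations. Sizes and the learning rate are illustrative, not the 256 × 256 configuration of the paper.

```python
# One contrastive-divergence (CD-1) update for a small binary RBM.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 16, 16, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    ph0 = sigmoid(v0 @ W + b_h)                    # positive phase
    h0 = (rng.random(n_hid) < ph0).astype(float)   # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b_v)                  # reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)                   # negative phase
    return np.outer(v0, ph0) - np.outer(pv1, ph1), v0 - pv1, ph0 - ph1

v = (rng.random(n_vis) < 0.5).astype(float)        # one binary training vector
dW, db_v, db_h = cd1_step(v)
W += lr * dW; b_v += lr * db_v; b_h += lr * db_h   # connection updates
```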

  11. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment to navigate successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), which is a very powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  12. Neural and computational processes underlying dynamic changes in self-esteem

    Science.gov (United States)

    Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J

    2017-01-01

    Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an ‘interpersonal vulnerability’ dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability. PMID:29061228

  13. Neural and computational processes underlying dynamic changes in self-esteem.

    Science.gov (United States)

    Will, Geert-Jan; Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J

    2017-10-24

    Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an 'interpersonal vulnerability' dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability.

  14. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANNs, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise, and it does not depend on the sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
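
    A NARX estimator of the kind described feeds delayed copies of the exogenous input (here the BOLD signal) and of its own past outputs into a small feedforward network. The sketch below shows only the forward recursion with random, untrained weights; the delay orders, layer size and surrogate data are illustrative assumptions, not the optimized networks of the paper.

```python
# Sketch of a NARX-style forward recursion: inputs are lagged BOLD samples
# (exogenous input) plus the model's own past predictions.  Weights are random
# here; in practice they would be trained.  All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_u, d_y, n_hid = 3, 2, 8                     # input/output delays, hidden units
W1 = rng.standard_normal((n_hid, d_u + d_y))
b1 = np.zeros(n_hid)
W2 = rng.standard_normal(n_hid)

def narx_step(u_hist, y_hist):
    """u_hist: last d_u BOLD samples; y_hist: last d_y predicted states."""
    z = np.tanh(W1 @ np.concatenate([u_hist, y_hist]) + b1)
    return W2 @ z

bold = rng.standard_normal(50)                # surrogate BOLD time series
y_pred = np.zeros(50)
for t in range(max(d_u, d_y), 50):
    y_pred[t] = narx_step(bold[t - d_u:t], y_pred[t - d_y:t])
```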

  15. Neural circuit architecture defects in a Drosophila model of Fragile X syndrome are alleviated by minocycline treatment and genetic removal of matrix metalloproteinase

    Directory of Open Access Journals (Sweden)

    Saul S. Siller

    2011-09-01

    Fragile X syndrome (FXS), caused by loss of the fragile X mental retardation 1 (FMR1) product (FMRP), is the most common cause of inherited intellectual disability and autism spectrum disorders. FXS patients suffer multiple behavioral symptoms, including hyperactivity, disrupted circadian cycles, and learning and memory deficits. Recently, a study in the mouse FXS model showed that the tetracycline derivative minocycline effectively remediates the disease state via a proposed matrix metalloproteinase (MMP) inhibition mechanism. Here, we use the well-characterized Drosophila FXS model to assess the effects of minocycline treatment on multiple neural circuit morphological defects and to investigate the MMP hypothesis. We first treat Drosophila Fmr1 (dfmr1) null animals with minocycline to assay the effects on mutant synaptic architecture in three disparate locations: the neuromuscular junction (NMJ), clock neurons in the circadian activity circuit and Kenyon cells in the mushroom body learning and memory center. We find that minocycline effectively restores normal synaptic structure in all three circuits, promising therapeutic potential for FXS treatment. We next tested the MMP hypothesis by assaying the effects of overexpressing the sole Drosophila tissue inhibitor of MMP (TIMP) in dfmr1 null mutants. We find that TIMP overexpression effectively prevents defects in the NMJ synaptic architecture in dfmr1 mutants. Moreover, co-removal of dfmr1 similarly rescues TIMP overexpression phenotypes, including cellular tracheal defects and lethality. To further test the MMP hypothesis, we generated dfmr1;mmp1 double null mutants. Null mmp1 mutants are 100% lethal and display cellular tracheal defects, but co-removal of dfmr1 allows adult viability and prevents tracheal defects. Conversely, co-removal of mmp1 ameliorates the NMJ synaptic architecture defects in dfmr1 null mutants, despite the lack of detectable difference in MMP1 expression or gelatinase activity between the single...

  16. Neural mechanisms of human perceptual choice under focused and divided attention

    Science.gov (United States)

    Wyart, Valentin; Myers, Nicholas E.; Summerfield, Christopher

    2015-01-01

    Perceptual decisions occur after evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information towards an appropriate response. Here we recorded human electroencephalographic (EEG) activity whilst participants categorised one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioural and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10–30 Hz) signals, resulting in a ‘leaky’ accumulation process which conferred greater behavioural influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and place new capacity constraints on decision-theoretic models of information integration under cognitive load. PMID:25716848

  17. Proceedings of the workshop cum symposium on applications of neural networks in nuclear science and industry

    International Nuclear Information System (INIS)

    1993-01-01

    The Workshop cum Symposium on Application of Neural Networks in Nuclear Science and Industry was held at Bombay during November 24-26, 1993. The past decade has seen many important advances in the design and technology of artificial neural networks in research and industry. Neural networks constitute an interdisciplinary field covering a broad spectrum of applications in surveillance, diagnosis of nuclear power plants, nuclear spectroscopy, speech and written text recognition, robotic control, signal processing etc. The objective of the symposium was to promote awareness of advances in neural network research and applications. It also aimed to review the present status and give direction for future technological developments. Contributed papers have been organized into the following groups: a) neural network architectures, learning algorithms and modelling, b) computer vision and image processing, c) signal processing, d) neural networks and fuzzy systems, e) nuclear applications and f) neural networks and allied applications. Papers relevant to INIS are indexed separately. (M.K.V.)

  18. Robust stability analysis of switched Hopfield neural networks with time-varying delay under uncertainty

    International Nuclear Information System (INIS)

    Huang He; Qu Yuzhong; Li Hanxiong

    2005-01-01

    With the development of intelligent control, switched systems have been widely studied. Here we try to introduce some ideas of the switched systems into the field of neural networks. In this Letter, a class of switched Hopfield neural networks with time-varying delay is investigated. The parametric uncertainty is considered and assumed to be norm bounded. Firstly, the mathematical model of the switched Hopfield neural networks is established in which a set of Hopfield neural networks are used as the individual subsystems and an arbitrary switching rule is assumed; Secondly, robust stability analysis for such switched Hopfield neural networks is addressed based on the Lyapunov-Krasovskii approach. Some criteria are given to guarantee the switched Hopfield neural networks to be globally exponentially stable for all admissible parametric uncertainties. These conditions are expressed in terms of some strict linear matrix inequalities (LMIs). Finally, a numerical example is provided to illustrate our results

  19. Characterizing root response phenotypes by neural network analysis

    OpenAIRE

    Hatzig, Sarah V.; Schiessl, Sarah; Stahl, Andreas; Snowdon, Rod J.

    2015-01-01

    Roots play an immediate role as the interface for water acquisition. To improve sustainability in low-water environments, breeders of major crops must therefore pay closer attention to advantageous root phenotypes; however, the complexity of root architecture in response to stress can be difficult to quantify. Here, the Sholl method, an established technique from neurobiology used for the characterization of neural network anatomy, was adapted to more adequately describe root responses to osm...

  20. High-Performance Neural Networks for Visual Object Classification

    OpenAIRE

    Cireşan, Dan C.; Meier, Ueli; Masci, Jonathan; Gambardella, Luca M.; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better ...

  1. Modeling and control of magnetorheological fluid dampers using neural networks

    Science.gov (United States)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  2. Multi-step wind speed forecasting based on a hybrid forecasting architecture and an improved bat algorithm

    International Nuclear Information System (INIS)

    Xiao, Liye; Qian, Feng; Shao, Wei

    2017-01-01

    Highlights: • Propose a hybrid architecture based on a modified bat algorithm for multi-step wind speed forecasting. • Improve the accuracy of multi-step wind speed forecasting. • Modify the bat algorithm with CG to improve optimization performance. - Abstract: As one of the most promising sustainable energy sources, wind energy plays an important role in energy development because of its cleanliness. Generally, wind speed forecasting, which has an essential influence on wind power systems, is regarded as a challenging task. Analyses based on single-step wind speed forecasting have been widely used, but their results are insufficient in ensuring the reliability and controllability of wind power systems. In this paper, a new forecasting architecture based on decomposing algorithms and modified neural networks is successfully developed for multi-step wind speed forecasting. Four different hybrid models are contained in this architecture, and to further improve the forecasting performance, a modified bat algorithm (BA) with the conjugate gradient (CG) method is developed to optimize the initial weights between layers and thresholds of the hidden layer of neural networks. To investigate the forecasting abilities of the four models, the wind speed data collected from four different wind power stations in Penglai, China, were used as a case study. The numerical experiments showed that the hybrid model including the singular spectrum analysis and general regression neural network with CG-BA (SSA-CG-BA-GRNN) achieved the most accurate forecasting results in one-step to three-step wind speed forecasting.
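
    The GRNN component of the hybrid is essentially kernel-weighted regression over stored training patterns. The sketch below shows that regression step alone on synthetic lagged wind-speed data; in the full model the inputs would come from SSA decomposition and the smoothing factor would be tuned by the CG-modified bat algorithm, neither of which is reproduced here.

```python
# Minimal general regression neural network (GRNN): a kernel-weighted average
# of training targets.  The smoothing factor sigma is fixed here; data are synthetic.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
speeds = rng.uniform(2, 12, size=200)                            # surrogate wind speeds
X = np.stack([speeds[:-3], speeds[1:-2], speeds[2:-1]], axis=1)  # three lagged values
y = speeds[3:]                                                   # one-step-ahead target
print(grnn_predict(X[:150], y[:150], X[150:155]))
```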

  3. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    Science.gov (United States)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, the classification of roads, vegetation, buildings and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for ground object segmentation and the results could be further improved. This paper used a convolutional neural network named U-Net, whose structure has a contracting path and an expansive path to obtain high-resolution output. In the network, we added BN layers, which are more conducive to the backward pass. Moreover, after the upsampling convolutions, we added dropout layers to prevent overfitting. Together, these changes produce more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.
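
    A minimal PyTorch sketch of the ingredients named above, a U-Net-style contracting/expansive structure with skip connections, batch-normalization layers, and dropout on the up-sampling path, is given below. Channel counts, depth and the number of output classes are illustrative assumptions and far smaller than in the actual experiments.

```python
# Tiny U-Net-style segmentation network with BN layers and dropout after the
# up-sampling path.  All sizes are illustrative.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1, self.enc2 = block(3, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(block(32, 16), nn.Dropout2d(0.5))
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # contracting path
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec(torch.cat([self.up(e2), e1], dim=1))    # expansive path + skip
        return self.head(d1)                                  # per-pixel class scores

net = TinyUNet()
print(net(torch.randn(1, 3, 64, 64)).shape)                   # torch.Size([1, 4, 64, 64])
```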

  4. DSP Architecture Design Essentials

    CERN Document Server

    Marković, Dejan

    2012-01-01

    In DSP Architecture Design Essentials, authors Dejan Marković and Robert W. Brodersen cover a key subject for the successful realization of DSP algorithms for communications, multimedia, and healthcare applications. The book addresses the need for DSP architecture design that maps advanced DSP algorithms to hardware in the most power- and area-efficient way. The key feature of this text is a design methodology based on a high-level design model that leads to hardware implementation with minimum power and area. The methodology includes algorithm-level considerations such as automated word-length reduction and intrinsic data properties that can be leveraged to reduce hardware complexity. From a high-level data-flow graph model, an architecture exploration methodology based on linear programming is used to create an array of architectural solutions tailored to the underlying hardware technology. The book is supplemented with online material: bibliography, design examples, CAD tutorials and custom software.

  5. Neural signatures of social conformity: A coordinate-based activation likelihood estimation meta-analysis of functional brain imaging studies.

    Science.gov (United States)

    Wu, Haiyan; Luo, Yi; Feng, Chunliang

    2016-12-01

    People often align their behaviors with group opinions, known as social conformity. Many neuroscience studies have explored the neuropsychological mechanisms underlying social conformity. Here we employed a coordinate-based meta-analysis on neuroimaging studies of social conformity with the purpose of revealing the convergence of the underlying neural architecture. We identified a convergence of reported activation foci in regions associated with normative decision-making, including ventral striatum (VS), dorsal posterior medial frontal cortex (dorsal pMFC), and anterior insula (AI). Specifically, consistent deactivation of VS and activation of dorsal pMFC and AI are identified when people's responses deviate from group opinions. In addition, the deviation-related responses in dorsal pMFC predict people's conforming behavioral adjustments. These are consistent with current models that disagreement with others might evoke "error" signals, cognitive imbalance, and/or aversive feelings, which are plausibly detected in these brain regions as control signals to facilitate subsequent conforming behaviors. Finally, group opinions result in altered neural correlates of valuation, manifested as stronger responses of VS to stimuli endorsed than disliked by others. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Advanced approach to numerical forecasting using artificial neural networks

    Directory of Open Access Journals (Sweden)

    Michael Štencl

    2009-01-01

    Full Text Available The current global market is driven by many factors; in the information age, given the speed and volume of information distributed over many data channels, it is practically impossible to analyse all incoming information flows and transform them into usable data with classical methods. New requirements can be met by using other methods. Once trained on patterns, artificial neural networks can be used for forecasting, and they are able to work with extremely big data sets in reasonable time. The patterns used for the learning process are samples of past data. This paper uses a Radial Basis Function neural network in comparison with a Multi Layer Perceptron network with the back-propagation learning algorithm on a prediction task. The task works with a simplified numerical time series and includes forty observations with a prediction for the next five observations. The main topic of the article is the identification of the main differences between the neural network architectures used, together with numerical forecasting. The detected differences are then verified on a practical comparative example.
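
    A radial basis function network of the kind compared here can be reduced to a few lines when the centres are fixed and only the linear output layer is trained. The sketch below does this for a synthetic one-step-ahead prediction task; the series, centre selection and kernel width are illustrative assumptions, not the paper's data or settings.

```python
# RBF network for one-step-ahead time-series prediction: fixed centres taken
# from training patterns, output layer solved by least squares.  Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 60)) + 0.05 * rng.standard_normal(60)
lags = 5
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

centres = X[::4]                                  # a subset of patterns as centres
width = 1.0

def hidden(patterns):
    d2 = ((patterns[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

W, *_ = np.linalg.lstsq(hidden(X[:40]), y[:40], rcond=None)   # train output layer
pred = hidden(X[40:45]) @ W                                   # predict next points
print(np.round(pred, 3), np.round(y[40:45], 3))
```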

  7. A Streaming PCA VLSI Chip for Neural Data Compression.

    Science.gov (United States)

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

    Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction error and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction error and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
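
    To illustrate what "streaming" PCA means in this context, the fragment below updates a single principal direction one sample at a time using Oja's rule. This generic rule is used only as an illustration of sample-by-sample basis updating; it is not the specific algorithm or fixed-point arithmetic implemented on the chip, and the data are synthetic.

```python
# Streaming estimation of one principal component with Oja's rule.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 1e-3
for x in samples:                      # one update per incoming sample
    y = w @ x                          # projection = compressed representation
    w += eta * y * (x - y * w)         # Oja's rule keeps ||w|| close to 1

print(w, np.linalg.eigh(cov)[1][:, -1])   # compare with the leading eigenvector
```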

  8. Predicting physical time series using dynamic ridge polynomial neural networks.

    Directory of Open Access Journals (Sweden)

    Dhiya Al-Jumeily

    Full Text Available Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher-order and feedforward neural networks.

  9. Modulated error diffusion CGHs for neural nets

    Science.gov (United States)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).

  10. Predicting Electrocardiogram and Arterial Blood Pressure Waveforms with Different Echo State Network Architectures

    Science.gov (United States)

    2014-11-01

    The ability to predict electrocardiogram and arterial blood pressure waveforms can potentially help the medical staff in Intensive Care Units. Echo state networks are explored as a type of neural network for mining, understanding, and predicting electrocardiogram and arterial blood pressure waveforms. Several network ...

  11. Culture in the mind's mirror: how anthropology and neuroscience can inform a model of the neural substrate for cultural imitative learning.

    Science.gov (United States)

    Losin, Elizabeth A Reynolds; Dapretto, Mirella; Iacoboni, Marco

    2009-01-01

    Cultural neuroscience, the study of how cultural experience shapes the brain, is an emerging subdiscipline in the neurosciences. Yet, a foundational question to the study of culture and the brain remains neglected by neuroscientific inquiry: "How does cultural information get into the brain in the first place?" Fortunately, the tools needed to explore the neural architecture of cultural learning - anthropological theories and cognitive neuroscience methodologies - already exist; they are merely separated by disciplinary boundaries. Here we review anthropological theories of cultural learning derived from fieldwork and modeling; since cultural learning theory suggests that sophisticated imitation abilities are at the core of human cultural learning, we focus our review on cultural imitative learning. Accordingly we proceed to discuss the neural underpinnings of imitation and other mechanisms important for cultural learning: learning biases, mental state attribution, and reinforcement learning. Using cultural neuroscience theory and cognitive neuroscience research as our guides, we then propose a preliminary model of the neural architecture of cultural learning. Finally, we discuss future studies needed to test this model and fully explore and explain the neural underpinnings of cultural imitative learning.

  12. Data architecture from zen to reality

    CERN Document Server

    Tupper, Charles D

    2011-01-01

    Data Architecture: From Zen to Reality explains the principles underlying data architecture, how data evolves with organizations, and the challenges organizations face in structuring and managing their data. It also discusses proven methods and technologies to solve the complex issues dealing with data. The book uses a holistic approach to the field of data architecture by covering the various applied areas of data, including data modelling and data model management, data quality, data governance, enterprise information management, database design, data warehousing, and warehouse design. This book is a core resource for anyone emplacing, customizing or aligning data management systems, taking the Zen-like idea of data architecture to an attainable reality.

  13. A processing architecture for associative short-term memory in electronic noses

    Science.gov (United States)

    Pioggia, G.; Ferro, M.; Di Francesco, F.; DeRossi, D.

    2006-11-01

    Electronic nose (e-nose) architectures usually consist of several modules that process various tasks such as control, data acquisition, data filtering, feature selection and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks, and fuzzy rules used to implement such tasks may lead to issues concerning module interconnection and cooperation. Moreover, a new learning phase is mandatory once new measurements have been added to the dataset, thus causing changes in the previously derived model. Consequently, if a loss in the previous learning occurs (catastrophic interference), real-time applications of e-noses are limited. To overcome these problems this paper presents an architecture for dynamic and efficient management of multi-transducer data processing techniques and for saving an associative short-term memory of the previously learned model. The architecture implements an artificial model of a hippocampus-based working memory, enabling the system to be ready for real-time applications. Starting from the base models available in the architecture core, dedicated models for neurons, maps and connections were tailored to an artificial olfactory system devoted to analysing olive oil. In order to verify the ability of the processing architecture in associative and short-term memory, a paired-associate learning test was applied. The avoidance of catastrophic interference was observed.

  14. One-dimensional model of cable-in-conduit superconductors under cyclic loading using artificial neural networks

    International Nuclear Information System (INIS)

    Lefik, M.; Schrefler, B.A.

    2002-01-01

    An artificial neural network with two hidden layers is trained to define a mechanical constitutive relation for superconducting cable under transverse cyclic loading. The training is performed using a set of experimental data. The behaviour of the cable is strongly non-linear, and irreversible phenomena result in complicated hysteresis loops. The performance of the ANN, which is applied as a tool for the storage, interpolation and interpretation of experimental data, is investigated from both numerical and physical viewpoints.

  15. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed several tests to be made in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too.

  16. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    Science.gov (United States)

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed several tests to be made in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too. PMID:27656140

  17. Architectural Creation of Light

    DEFF Research Database (Denmark)

    Bülow, Katja

    2015-01-01

    Bidraget "Architectural Creation of Light" indgår sammen med 108 andre bidrag i bogen "You Say Light, I Think Shadow". Bogens indhold undersøger: "Hvad er lys". I dette bidrag besvares spørgsmålet gennem iagttagelser af arkitektstuderendes undersøgelser af lyset i deres arbejdsmodeller i...

  18. A neural network model of ventriloquism effect and aftereffect.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  19. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

  1. Learning representations for the early detection of sepsis with deep neural networks.

    Science.gov (United States)

    Kam, Hye Jin; Kim, Ha Young

    2017-10-01

    Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the area under the ROC curve (AUC) of the models was 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.
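
    A minimal PyTorch sketch of the long short-term memory idea discussed above, a recurrent layer over hourly vital-sign vectors followed by a sigmoid read-out of sepsis risk, is shown below. The feature count, sequence length and hidden size are illustrative assumptions, not those of the study.

```python
# LSTM classifier over sequences of hourly vital-sign vectors; sizes illustrative.
import torch
import torch.nn as nn

class SepsisLSTM(nn.Module):
    def __init__(self, n_features=9, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)       # final hidden state summarizes the stay
        return torch.sigmoid(self.head(h_n[-1]))

model = SepsisLSTM()
vitals = torch.randn(8, 24, 9)           # 8 patients, 24 hourly measurements each
print(model(vitals).shape)                # torch.Size([8, 1]) -- sepsis probabilities
```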

  2. Distributed neural system for emotional intelligence revealed by lesion mapping.

    Science.gov (United States)

    Barbey, Aron K; Colom, Roberto; Grafman, Jordan

    2014-03-01

    Cognitive neuroscience has made considerable progress in understanding the neural architecture of human intelligence, identifying a broadly distributed network of frontal and parietal regions that support goal-directed, intelligent behavior. However, the contributions of this network to social and emotional aspects of intellectual function remain to be well characterized. Here we investigated the neural basis of emotional intelligence in 152 patients with focal brain injuries using voxel-based lesion-symptom mapping. Latent variable modeling was applied to obtain measures of emotional intelligence, general intelligence and personality from the Mayer, Salovey, Caruso Emotional Intelligence Test (MSCEIT), the Wechsler Adult Intelligence Scale and the Neuroticism-Extroversion-Openness Inventory, respectively. Regression analyses revealed that latent scores for measures of general intelligence and personality reliably predicted latent scores for emotional intelligence. Lesion mapping results further indicated that these convergent processes depend on a shared network of frontal, temporal and parietal brain regions. The results support an integrative framework for understanding the architecture of executive, social and emotional processes and make specific recommendations for the interpretation and application of the MSCEIT to the study of emotional intelligence in health and disease.

  3. Gap Filling of Daily Sea Levels by Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Lyubka Pashova

    2013-06-01

    Full Text Available In recent years, intelligent methods such as artificial neural networks have been successfully applied to data analysis in different fields of the geosciences. One of the practical problems encountered is the presence of gaps in the time series, which prevents their comprehensive usage for scientific and practical purposes. The article briefly describes two types of artificial neural network (ANN) architectures - Feed-Forward Backpropagation (FFBP) and the recurrent Echo State Network (ESN). In some cases, an ANN can be used as an alternative to traditional methods to fill in missing values in the time series. We conducted several experiments to fill the missing values of daily sea levels spanning a 5-year period using both ANN architectures. A multiple linear regression for the same purpose has also been applied. The sea level data are derived from the records of the tide gauge Burgas, which is located on the western Black Sea coast. The achieved results show that the performance of the ANN models is better than that of the classical one, and they are very promising for the real-time interpolation of missing data in the time series.
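
    The recurrent Echo State Network mentioned above keeps a fixed random reservoir and trains only a linear read-out, which is part of what makes it attractive for this kind of task. The NumPy sketch below builds such a reservoir for one-step-ahead prediction of a synthetic daily series; reservoir size, spectral radius and the data are illustrative assumptions, not the tide-gauge setup of the article.

```python
# Minimal echo state network: fixed random reservoir, trained linear read-out.
import numpy as np

rng = np.random.default_rng(0)
n_res, rho = 100, 0.9
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.standard_normal((n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))           # set spectral radius

level = np.sin(np.linspace(0, 30, 500)) + 0.1 * rng.standard_normal(500)
states = np.zeros((len(level), n_res))
x = np.zeros(n_res)
for t in range(len(level) - 1):
    x = np.tanh(W @ x + W_in * level[t])                   # reservoir update
    states[t + 1] = x

# ridge-regression read-out predicting level[t] from the reservoir state at t
A, b = states[100:400], level[100:400]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)
print(float(states[400] @ W_out), float(level[400]))       # prediction vs. truth
```

    Because only the read-out weights are trained, re-fitting the model when new gap-free data arrive amounts to a single linear solve.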

  4. Distributed neural system for emotional intelligence revealed by lesion mapping

    Science.gov (United States)

    Colom, Roberto; Grafman, Jordan

    2014-01-01

    Cognitive neuroscience has made considerable progress in understanding the neural architecture of human intelligence, identifying a broadly distributed network of frontal and parietal regions that support goal-directed, intelligent behavior. However, the contributions of this network to social and emotional aspects of intellectual function remain to be well characterized. Here we investigated the neural basis of emotional intelligence in 152 patients with focal brain injuries using voxel-based lesion-symptom mapping. Latent variable modeling was applied to obtain measures of emotional intelligence, general intelligence and personality from the Mayer, Salovey, Caruso Emotional Intelligence Test (MSCEIT), the Wechsler Adult Intelligence Scale and the Neuroticism-Extroversion-Openness Inventory, respectively. Regression analyses revealed that latent scores for measures of general intelligence and personality reliably predicted latent scores for emotional intelligence. Lesion mapping results further indicated that these convergent processes depend on a shared network of frontal, temporal and parietal brain regions. The results support an integrative framework for understanding the architecture of executive, social and emotional processes and make specific recommendations for the interpretation and application of the MSCEIT to the study of emotional intelligence in health and disease. PMID:23171618

  5. Modern architecture in a life cycle perspective

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2017-01-01

    By confronting the mistakes from the Modern Movement, the ideas of modernistic architecture are under pressure. This paper will summarize the primary architectural mistakes of the mono-functional thinking in planning and building and the non-appropriate environmental dispositions of the big plans...

  6. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
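
    For orientation, the sketch below shows the kind of elementary routine such a toolbox parallelises: a plain amplitude-threshold spike detector with a refractory period, written in NumPy. It is a hypothetical illustration only, not the EC-PC detection or binary pursuit sorting algorithms provided by NPE.

      import numpy as np

      def detect_spikes(signal, fs, thresh_mult=4.5, refractory_ms=1.0):
          """Detect negative threshold crossings; the threshold is a multiple
          of the robust noise estimate median(|x|)/0.6745."""
          noise_sigma = np.median(np.abs(signal)) / 0.6745
          thresh = thresh_mult * noise_sigma
          crossings = np.flatnonzero((signal[1:] < -thresh) & (signal[:-1] >= -thresh))
          min_gap = int(refractory_ms * 1e-3 * fs)   # enforce a refractory period
          spikes, last = [], -min_gap
          for idx in crossings:
              if idx - last >= min_gap:
                  spikes.append(idx)
                  last = idx
          return np.asarray(spikes)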

  7. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    Science.gov (United States)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is bench-marked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10(exp 9) ops/sec, was interfaced directly to a three degree of freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microsec.

  8. Internal mechanisms underlying anticipatory language processing: Evidence from event-related-potentials and neural oscillations.

    Science.gov (United States)

    Li, Xiaoqing; Zhang, Yuping; Xia, Jinyan; Swaab, Tamara Y

    2017-07-28

    Although numerous studies have demonstrated that the language processing system can predict upcoming content during comprehension, there is still no clear picture of the anticipatory stage of predictive processing. This electroencephalograph study examined the cognitive and neural oscillatory mechanisms underlying anticipatory processing during language comprehension, and the consequences of this prediction for bottom-up processing of predicted/unpredicted content. Participants read Mandarin Chinese sentences that were either strongly or weakly constraining and that contained critical nouns that were congruent or incongruent with the sentence contexts. We examined the effects of semantic predictability on anticipatory processing prior to the onset of the critical nouns and on integration of the critical nouns. The results revealed that, at the integration stage, the strong-constraint condition (compared to the weak-constraint condition) elicited a reduced N400 and reduced theta activity (4-7Hz) for the congruent nouns, but induced beta (13-18Hz) and theta (4-7Hz) power decreases for the incongruent nouns, indicating benefits of confirmed predictions and potential costs of disconfirmed predictions. More importantly, at the anticipatory stage, the strongly constraining context elicited an enhanced sustained anterior negativity and beta power decrease (19-25Hz), which indicates that strong prediction places a higher processing load on the anticipatory stage of processing. The differences (in the ease of processing and the underlying neural oscillatory activities) between anticipatory and integration stages of lexical processing were discussed with regard to predictive processing models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Invariant moments based convolutional neural networks for image analysis

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi G.V. Mahesh

    2017-01-01

    Full Text Available The paper proposes a method using a convolutional neural network to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images, and facial expression recognition. The novelty of the method lies in the use of initial trainable convolution kernel coefficients derived from Zernike moments by varying the moment order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting the shape information suitable for hierarchical feature learning to carry out image analysis and classification. Furthermore, the results showed outstanding performance of the Zernike-moment-based kernels in terms of computation time and classification accuracy.
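
    As a sketch of how moment-based kernels can be generated, the snippet below evaluates the real part of Zernike polynomials of increasing order on a small grid; such arrays could seed a first convolutional layer in place of random kernels. The kernel size and range of orders are assumptions for illustration, not the configuration used in the paper.

      import numpy as np
      from math import factorial

      def zernike_kernel(n, m, size=7):
          """Real part of the Zernike polynomial V_n^m sampled on a size x size grid."""
          assert (n - abs(m)) % 2 == 0 and abs(m) <= n
          ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
          rho, theta = np.hypot(xs, ys), np.arctan2(ys, xs)
          R = np.zeros_like(rho)
          for k in range((n - abs(m)) // 2 + 1):
              c = ((-1) ** k * factorial(n - k)
                   / (factorial(k)
                      * factorial((n + abs(m)) // 2 - k)
                      * factorial((n - abs(m)) // 2 - k)))
              R += c * rho ** (n - 2 * k)
          kern = R * np.cos(m * theta)
          kern[rho > 1.0] = 0.0          # Zernike polynomials are defined on the unit disk
          return kern

      # one kernel per valid (n, m) pair up to order 4
      kernels = np.stack([zernike_kernel(n, m)
                          for n in range(5) for m in range(-n, n + 1, 2)])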

  10. Neural networks for feedback feedforward nonlinear control systems.

    Science.gov (United States)

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.

  11. Nonlinear identification of process dynamics using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, A.F.; Chong, K.T.

    1992-01-01

    In this paper the nonlinear identification of process dynamics encountered in nuclear power plant components is addressed, in an input-output sense, using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the model structure to be identified. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard backpropagation learning algorithm is modified, and it is used for the supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The response of a representative steam generator is predicted using a neural network, and it is compared to the response obtained from a sophisticated computer model based on first principles. The transient responses compare well, although further research is warranted to determine the predictive capabilities of these networks during more severe operational transients and accident scenarios.

  12. Chinese Sentence Classification Based on Convolutional Neural Network

    Science.gov (United States)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP). Feature extraction is often regarded as the key point for natural language processing. Traditional approaches based on machine learning, such as the Naive Bayesian Model, cannot take high-level features into consideration. Neural networks for sentence classification can make use of contextual information to achieve better results. In this paper, we focus on classifying Chinese sentences and propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, whereas most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to obtain a better result. We also use tanh as the activation function instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
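
    A minimal sketch of the two modelling choices highlighted above, assuming PyTorch and hypothetical vocabulary and class counts: tanh activations after the convolutions and a margin-based (SVM-style) loss in place of softmax cross-entropy.

      import torch
      import torch.nn as nn

      class SentenceCNN(nn.Module):
          """Word-embedding CNN with tanh activations; trained with a hinge loss."""
          def __init__(self, vocab_size, embed_dim=128, n_filters=100, n_classes=10):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, embed_dim)
              self.convs = nn.ModuleList(
                  [nn.Conv1d(embed_dim, n_filters, kernel_size=k) for k in (3, 4, 5)])
              self.fc = nn.Linear(3 * n_filters, n_classes)

          def forward(self, tokens):                     # tokens: (batch, seq_len)
              x = self.embed(tokens).transpose(1, 2)     # (batch, embed_dim, seq_len)
              pooled = [torch.tanh(conv(x)).max(dim=2).values for conv in self.convs]
              return self.fc(torch.cat(pooled, dim=1))   # per-class scores

      model = SentenceCNN(vocab_size=50000)
      criterion = nn.MultiMarginLoss()                   # multi-class hinge loss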

  13. Polarity-specific high-level information propagation in neural networks.

    Science.gov (United States)

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  14. Autoshaped choice in artificial neural networks: implications for behavioral economics and neuroeconomics.

    Science.gov (United States)

    Burgos, José E; García-Leal, Óscar

    2015-05-01

    An existing neural network model of conditioning was used to simulate autoshaped choice. In this phenomenon, pigeons first receive an autoshaping procedure with two keylight stimuli X and Y separately paired with food in a forward-delay manner, intermittently for X and continuously for Y. Then pigeons receive unreinforced choice test trials of X and Y concurrently present. Most pigeons choose Y. This preference for a more valuable response alternative is a form of economic behavior that makes the phenomenon relevant to behavioral economics. The phenomenon also suggests a role for Pavlovian contingencies in economic behavior. The model used, in contrast to others, predicts autoshaping and automaintenance, so it is uniquely positioned to predict autoshaped choice. The model also contemplates neural substrates of economic behavior in neuroeconomics, such as dopaminergic and hippocampal systems. A feedforward neural network architecture was designed to simulate a neuroanatomical differentiation between two environment-behavior relations X-R1 and Y-R2, [corrected] where R1 and R2 denote two different emitted responses (not unconditionally elicited by the reward). Networks with this architecture received a training protocol that simulated an autoshaped-choice procedure. Most networks simulated the phenomenon. Implications for behavioral economics and neuroeconomics, limitations, and the issue of model appraisal are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  16. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  17. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  18. The role of stochasticity in an information-optimal neural population code

    International Nuclear Information System (INIS)

    Stocks, N G; Nikitin, A P; McDonnell, M D; Morse, R P

    2009-01-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each displays Poisson statistics (the so called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.

  19. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  20. Modern architecture in a life cycle perspective

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2017-01-01

    By confronting the mistakes from the Modern Movement, the ideas of modernistic architecture are under pressure. This paper will summarize the primary architectural mistakes of the mono-functional thinking in planning and building and the non-appropriate environmental dispositions of the big plans...... architectural transformations on city level and on housing level. The transformation goals are to secure the economy and the social and the environmental aspects in the transformation´s life-cycle perspective in order to make the buildings and the districts interact with and adapt to society. The conclusion...... points out the architectural consequences of prioritizing in the transformation process the social parameters higher than the original rigid architectural theories....

  1. Application of fuzzy neural network technologies in management of transport and logistics processes in Arctic

    Science.gov (United States)

    Levchenko, N. G.; Glushkov, S. V.; Sobolevskaya, E. Yu; Orlov, A. P.

    2018-05-01

    A method of modeling the transport and logistics process using fuzzy neural network technologies is considered. Analysis of the implemented fuzzy neural network model of the information management system for transnational multimodal transportation showed that the method is well suited to managing transport and logistics processes under Arctic and Subarctic conditions. The modular architecture of the model can be expanded by incorporating additional modules as operating conditions in the Arctic and Subarctic present new and more realistic tasks; the information management system can thus be extended without affecting the underlying method. The model has a wide range of possible applications, including analysis of the situation and behavior of interacting elements; dynamic monitoring and diagnostics of management processes; simulation of real events and processes; and prediction and prevention of critical situations.

  2. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model was widely used for the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of the bushing in terms of amplitude and frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. A linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitations with different frequencies and amplitudes are applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers.
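
    A minimal sketch of the hybrid idea, with hypothetical stiffness and damping values: the linear part carries the stiffness/damping force, and a small network is fitted to the residual (hysteretic) component of the measured force.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      class HybridBushing:
          """Linear stiffness/damping model plus a neural correction term."""
          def __init__(self, k=1.0e5, c=500.0):           # illustrative parameters
              self.k, self.c = k, c
              self.net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000,
                                      random_state=0)

          def fit(self, disp, vel, force):
              residual = force - (self.k * disp + self.c * vel)
              self.net.fit(np.column_stack([disp, vel]), residual)
              return self

          def predict(self, disp, vel):
              linear = self.k * disp + self.c * vel
              return linear + self.net.predict(np.column_stack([disp, vel]))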

  3. Optimal neural networks for protein-structure prediction

    International Nuclear Information System (INIS)

    Head-Gordon, T.; Stillinger, F.H.

    1993-01-01

    The successful application of neural-network algorithms for prediction of protein structure is stymied by three problem areas: the sparsity of the database of known protein structures, poorly devised network architectures which make the input-output mapping opaque, and a global optimization problem in the multiple-minima space of the network variables. We present a simplified polypeptide model residing in two dimensions with only two amino-acid types, A and B, which allows the determination of the global energy structure for all possible sequences of pentamer, hexamer, and heptamer lengths. This model simplicity allows us to compile a complete structural database and to devise neural networks that reproduce the tertiary structure of all sequences with absolute accuracy and with the smallest number of network variables. These optimal networks reveal that the three problem areas are convoluted, but that thoughtful network designs can actually deconvolute these detrimental traits to provide network algorithms that genuinely impact on the ability of the network to generalize or learn the desired mappings. Furthermore, the two-dimensional polypeptide model shows sufficient chemical complexity so that transfer of neural-network technology to more realistic three-dimensional proteins is evident

  4. A Neural Network Model to Learn Multiple Tasks under Dynamic Environments

    Science.gov (United States)

    Tsumori, Kenji; Ozawa, Seiichi

    When environments are dynamically changed for agents, the knowledge acquired in an environment might become useless in the future. In such dynamic environments, agents should be able to not only acquire new knowledge but also modify old knowledge in learning. However, modifying all knowledge acquired before is not efficient because the knowledge once acquired may be useful again when a similar environment reappears and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: resource allocating network, long-term & short-term memory, and environment change detector. We evaluate the model under a class of dynamic environments where multiple function approximation tasks are sequentially given. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.

  5. A neural-based remote eye gaze tracker under natural head motion.

    Science.gov (United States)

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.

  6. Classification of remotely sensed data using OCR-inspired neural network techniques. [Optical Character Recognition

    Science.gov (United States)

    Kiang, Richard K.

    1992-01-01

    Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.

  7. Selection of hadronic W-decays in DELPHI with feed forward neural networks - An update

    CERN Document Server

    Becks, K H; Müller, U; Wahlen, H

    2003-01-01

    Since 1998 feed forward neural networks have been successfully applied to select candidates of hadronic W-decays measured at different center-of-mass energies by the DELPHI collaboration at the Large Electron Positron collider at CERN. To prepare the final publication, the neural network was adapted to all center-of-mass energies. Detailed studies were performed concerning the level of preselection, the choice of network parameters and especially of the network architecture. The number of hidden nodes was optimized by testing different pruning methods. All studies and results will be discussed.

  8. Selection of hadronic W-decays in DELPHI with feed forward neural networks - an update

    International Nuclear Information System (INIS)

    Becks, K.-H.; Drees, J.; Mueller, U.; Wahlen, H.

    2003-01-01

    Since 1998 feed forward neural networks have been successfully applied to select candidates of hadronic W-decays measured at different center-of-mass energies by the DELPHI collaboration at the Large Electron Positron collider at CERN. To prepare the final publication, the neural network was adapted to all center-of-mass energies. Detailed studies were performed concerning the level of preselection, the choice of network parameters and especially of the network architecture. The number of hidden nodes was optimized by testing different pruning methods. All studies and results will be discussed.

  9. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  10. Neural mechanisms of human perceptual choice under focused and divided attention.

    Science.gov (United States)

    Wyart, Valentin; Myers, Nicholas E; Summerfield, Christopher

    2015-02-25

    Perceptual decisions occur after the evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information toward an appropriate response. Here we recorded human electroencephalographic (EEG) activity while participants categorized one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioral and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10-30 Hz) signals, resulting in a "leaky" accumulation process that conferred greater behavioral influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and places new capacity constraints on decision-theoretic models of information integration under cognitive load. Copyright © 2015 the authors 0270-6474/15/353485-14$15.00/0.
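
    The "leaky" accumulation described above can be illustrated with a few lines of Python; the leak constant and the sample values are arbitrary, and the sketch is a toy model rather than the authors' EEG-derived fit.

      import numpy as np

      def leaky_accumulate(samples, leak=0.2):
          """Leaky integration: the decision variable decays between samples,
          so recent evidence carries more weight than early evidence."""
          v = 0.0
          for s in samples:
              v = (1.0 - leak) * v + s
          return v

      # choose the stream with the larger accumulated tilt evidence
      left, right = np.random.randn(8) + 0.3, np.random.randn(8)
      choice = 'left' if leaky_accumulate(left) > leaky_accumulate(right) else 'right'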

  11. Temporal neural networks and transient analysis of complex engineering systems

    Science.gov (United States)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
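
    The short-term memory structure referred to above can be pictured with the standard discrete gamma memory, a cascade of identical leaky taps; the sketch below is that textbook form under an assumed order and memory parameter, not the exact LOGF neuron equations.

      import numpy as np

      def gamma_memory(x, order=3, mu=0.5):
          """Discrete gamma memory: tap k integrates tap k-1 with leak (1-mu),
          giving a tunable trace of the input history (0 < mu <= 1)."""
          T = len(x)
          g = np.zeros((order + 1, T))
          g[0] = x                                  # tap 0 is the raw input
          for t in range(1, T):
              for k in range(1, order + 1):
                  g[k, t] = (1 - mu) * g[k, t - 1] + mu * g[k - 1, t - 1]
          return g                                  # taps feed the feedforward (spatial) weights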

  12. Optical neural network system for pose determination of spinning satellites

    Science.gov (United States)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.

  13. Lifelong learning of human actions with deep neural network self-organization.

    Science.gov (United States)

    Parisi, German I; Tani, Jun; Weber, Cornelius; Wermter, Stefan

    2017-12-01

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning but rather learn a batch of training data with a predefined number of action classes and samples. Thus, there is the need to develop learning systems with the ability to incrementally process available perceptual cues and to adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning also when a significant number of sample labels are missing or corrupted during training sessions. Additional experiments show the ability of our model to adapt to non-stationary input avoiding catastrophic interference. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  14. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    Science.gov (United States)

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
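
    A toy illustration of a rate RNN with distinct excitatory and inhibitory populations is sketched below; the population sizes, weight scale and rate nonlinearity are assumptions, and no training to produce timed trajectories is included.

      import numpy as np

      rng = np.random.default_rng(0)
      n_exc, n_inh = 80, 20
      N = n_exc + n_inh

      # Sign-constrained recurrent weights (Dale's principle):
      # columns belonging to inhibitory units are negative.
      W = np.abs(rng.normal(0.0, 0.1, (N, N)))
      W[:, n_exc:] *= -1.0

      def simulate(T=500, dt=0.1, tau=1.0, drive=0.3):
          r = np.zeros(N)
          rates = np.empty((T, N))
          for t in range(T):
              x = W @ r + drive
              r = r + (dt / tau) * (-r + np.clip(np.tanh(x), 0.0, None))  # non-negative rates
              rates[t] = r
          return rates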

  15. Phylogenetic convolutional neural networks in metagenomics.

    Science.gov (United States)

    Fioravanti, Diego; Giarratano, Ylenia; Maggio, Valerio; Agostinelli, Claudio; Chierici, Marco; Jurman, Giuseppe; Furlanello, Cesare

    2018-03-08

    Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case for pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on the Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided into 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer taking care of passing to the following convolutional layer not only the data but also the ranked list of neighbours of each sample, thus mimicking the case of image data, transparently to the user.
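
    The neighbour-gathering step can be pictured with the sketch below: a patristic distance matrix is embedded with MultiDimensional Scaling and each feature is grouped with its phylogenetically nearest features, producing image-like patches for a convolution. The function name, patch size and use of scikit-learn are hypothetical; this is not the authors' Keras layer.

      import numpy as np
      from sklearn.manifold import MDS

      def phylo_patches(X, patristic, k=8):
          """X: (samples, features) abundances; patristic: (features, features) distances.
          Returns (samples, features, k) patches of each feature's k nearest neighbours."""
          coords = MDS(n_components=2, dissimilarity='precomputed',
                       random_state=0).fit_transform(patristic)
          d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
          neighbours = np.argsort(d2, axis=1)[:, :k]   # includes the feature itself
          return X[:, neighbours]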

  16. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    Science.gov (United States)

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
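
    A compact sketch of the combined architecture, with assumed channel counts, frame sizes and layer widths: a per-frame CNN summarises each time-frequency frame and an LSTM integrates the frame features over time before a linear regression head.

      import torch
      import torch.nn as nn

      class EMGConvRNN(nn.Module):
          """CNN features per time-frequency frame, LSTM over frames, regression head."""
          def __init__(self, n_channels=8, n_outputs=3, hidden=64):
              super().__init__()
              self.cnn = nn.Sequential(
                  nn.Conv2d(n_channels, 16, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4), nn.Flatten())        # 16*4*4 features per frame
              self.rnn = nn.LSTM(16 * 4 * 4, hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_outputs)

          def forward(self, x):            # x: (batch, time, channels, freq_bins, time_bins)
              b, T = x.shape[:2]
              feats = self.cnn(x.flatten(0, 1)).view(b, T, -1)
              out, _ = self.rnn(feats)
              return self.head(out)        # kinematic estimate at each time step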

  17. 3D multi-view convolutional neural networks for lung nodule classification

    Science.gov (United States)

    Kang, Guixia; Hou, Beibei; Zhang, Ningbo

    2017-01-01

    The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492

  18. Exponential synchronization of delayed neutral-type neural networks with Lévy noise under non-Lipschitz condition

    Science.gov (United States)

    Ma, Shuo; Kang, Yanmei

    2018-04-01

    In this paper, the exponential synchronization of stochastic neutral-type neural networks with time-varying delay and Lévy noise under non-Lipschitz condition is investigated for the first time. Using the general Itô's formula and the nonnegative semi-martingale convergence theorem, we derive general sufficient conditions of two kinds of exponential synchronization for the drive system and the response system with adaptive control. Numerical examples are presented to verify the effectiveness of the proposed criteria.

  19. Batch Policy Gradient Methods for Improving Neural Conversation Models

    OpenAIRE

    Kandasamy, Kirthevasan; Bachrach, Yoram; Tomioka, Ryota; Tarlow, Daniel; Carter, David

    2017-01-01

    We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chatbot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strateg...

  20. Neural coding in graphs of bidirectional associative memories.

    Science.gov (United States)

    Bouchain, A David; Palm, Günther

    2012-01-24

    In recent years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of Bidirectional Associative Memories (BAMs), where edges of the graph correspond to BAMs connecting two neural modules and nodes of the graph correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where mainly it is the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the represented patterns, which we need to identify and control when performing complex tasks like language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish these situations based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.
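
    The elementary building block, a Bidirectional Associative Memory storing pattern pairs with Hebbian weights, can be sketched as follows; the bipolar coding and iteration count are generic textbook choices rather than the specifics of the cortical model.

      import numpy as np

      def train_bam(X, Y):
          """Hebbian BAM weights for bipolar (+1/-1) pattern pairs stored as rows."""
          return X.T @ Y

      def recall(W, x, steps=5):
          """Alternate between the two layers until the pair (x, y) stabilises."""
          for _ in range(steps):
              y = np.sign(x @ W)
              x = np.sign(y @ W.T)
          return x, y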

  1. Fundamental study on the interpretation technique for 3-D MT data using neural networks. 2; Neural network wo mochiita sanjigen MT ho data kaishaku gijutsu ni kansuru kisoteki kenkyu. 2

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, K; Kobayashi, T [OYO Corp., Tokyo (Japan); Mogi, T [Kyushu University, Fukuoka (Japan). Faculty of Engineering; Spichak, V

    1997-10-22

    The behavior of neural networks with respect to noise and the constitution of an optimum network are studied for the construction of a 3-D MT data interpretation system using neural networks. The study examines the relationship between the noise level of the training (educational) data and the noise level that the constructed neural network can handle. It is found that the neural network is effective in interpreting data whose noise level is the same as that of the training data; that it cannot correctly interpret data it has not encountered during training, even if such data is free of noise; that the optimum number of neurons in the hidden layer is approximately 40 for a network architecture using the current system; and that recognition capability is enhanced when a logistic gain function is used in the hidden layer and a linear function in the output layer. 2 refs., 7 figs., 2 tabs.
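
    Given those findings, a comparable network can be set up as below; the scikit-learn estimator is an illustrative stand-in for the original system, and the commented variable names are placeholders.

      from sklearn.neural_network import MLPRegressor

      # ~40 logistic hidden units; MLPRegressor's output layer is linear by design
      model = MLPRegressor(hidden_layer_sizes=(40,), activation='logistic',
                           max_iter=5000, random_state=0)
      # model.fit(mt_response_features, resistivity_model_parameters)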

  2. Improved head direction command classification using an optimised Bayesian neural network.

    Science.gov (United States)

    Nguyen, Son T; Nguyen, Hung T; Taylor, Philip B; Middleton, James

    2006-01-01

    Assistive technologies have recently emerged to improve the quality of life of severely disabled people by enhancing their independence in daily activities. Since many of those individuals have limited or non-existent control from the neck downward, alternative hands-free input modalities have become very important for these people to access assistive devices. In hands-free control, head movement has proved to be a very effective user interface as it can provide a comfortable, reliable and natural way to access the device. Recently, neural networks have been shown to be useful not only for real-time pattern recognition but also for creating user-adaptive models. Since multi-layer perceptron neural networks trained using standard back-propagation may cause poor generalisation, the Bayesian technique has been proposed to improve the generalisation and robustness of these networks. This paper describes the use of Bayesian neural networks in developing a hands-free wheelchair control system. The experimental results show that, with the optimised architecture, Bayesian neural network classifiers can detect head commands of wheelchair users accurately, irrespective of their level of injury.

  3. Using repetitive transcranial magnetic stimulation to study the underlying neural mechanisms of human motor learning and memory.

    Science.gov (United States)

    Censor, Nitzan; Cohen, Leonardo G

    2011-01-01

    In the last two decades, there has been a rapid development in the research of the physiological brain mechanisms underlying human motor learning and memory. While conventional memory research performed on animal models uses intracellular recordings, microfusion of protein inhibitors to specific brain areas and direct induction of focal brain lesions, human research has so far utilized predominantly behavioural approaches and indirect measurements of neural activity. Repetitive transcranial magnetic stimulation (rTMS), a safe non-invasive brain stimulation technique, enables the study of the functional role of specific cortical areas by evaluating the behavioural consequences of selective modulation of activity (excitation or inhibition) on memory generation and consolidation, contributing to the understanding of the neural substrates of motor learning. Depending on the parameters of stimulation, rTMS can also facilitate learning processes, presumably through purposeful modulation of excitability in specific brain regions. rTMS has also been used to gain valuable knowledge regarding the timeline of motor memory formation, from initial encoding to stabilization and long-term retention. In this review, we summarize insights gained using rTMS on the physiological and neural mechanisms of human motor learning and memory. We conclude by suggesting possible future research directions, some with direct clinical implications.

  4. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    Science.gov (United States)

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information

  5. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations

    Directory of Open Access Journals (Sweden)

    Sheng-Jun Wang

    2011-06-01

    Full Text Available Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. It was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information processing.
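
    A minimal sketch of the kind of construction the two records above describe (not the authors' model or parameters): a hierarchical modular random graph with dense within-module, sparser within-supermodule, and sparsest between-supermodule connectivity, driven by a simple stochastic propagation rule, so that the lifetime of activity can be compared against a density-matched random graph. Module counts, connection probabilities and the firing probability are illustrative assumptions; in the paper's full model each module is a balanced excitatory-inhibitory spiking network.

```python
import numpy as np

rng = np.random.default_rng(0)

def hierarchical_modular(n=256, modules=16, supermodules=4,
                         p_in=0.30, p_mid=0.02, p_out=0.002):
    """Adjacency: dense within modules, sparser within supermodules, sparsest elsewhere."""
    mod = np.repeat(np.arange(modules), n // modules)
    sup = mod // (modules // supermodules)
    p = np.full((n, n), p_out)
    p[sup[:, None] == sup[None, :]] = p_mid
    p[mod[:, None] == mod[None, :]] = p_in
    a = (rng.random((n, n)) < p).astype(float)
    np.fill_diagonal(a, 0.0)
    return a

def random_matched(a):
    """Erdos-Renyi graph with the same node count and roughly the same edge count."""
    n = a.shape[0]
    p = a.sum() / (n * (n - 1))
    b = (rng.random((n, n)) < p).astype(float)
    np.fill_diagonal(b, 0.0)
    return b

def survival_time(a, steps=500, p_fire=0.10, seed_frac=0.05):
    """Steps until stochastic activity propagation dies out."""
    n = a.shape[0]
    active = rng.random(n) < seed_frac
    for t in range(steps):
        drive = a.T @ active.astype(float)        # number of active presynaptic nodes
        prob = 1.0 - (1.0 - p_fire) ** drive      # fires if at least one transmission succeeds
        active = rng.random(n) < prob
        if not active.any():
            return t
    return steps

a_mod = hierarchical_modular()
a_rand = random_matched(a_mod)
print("hierarchical modular  :", np.mean([survival_time(a_mod) for _ in range(20)]))
print("density-matched random:", np.mean([survival_time(a_rand) for _ in range(20)]))
```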

  6. Improvement of Wear Performance of Nano-Multilayer PVD Coatings under Dry Hard End Milling Conditions Based on Their Architectural Development

    Directory of Open Access Journals (Sweden)

    Shahereen Chowdhury

    2018-02-01

    Full Text Available The TiAlCrSiYN-based family of PVD (physical vapor deposition) hard coatings was specially designed for extreme conditions involving the dry ultra-performance machining of hardened tool steels. However, there is strong potential for further advances in the wear performance of the coatings through improvements in their architecture. A few different coating architectures (monolayer, multilayer, bi-multilayer, and bi-multilayer with an increased number of alternating nano-layers) were studied in relation to cutting-tool life. Comprehensive characterization of the structure and properties of the coatings has been performed using XRD, SEM, TEM, micro-mechanical studies and tool-life evaluation. The wear performance was then related to the ability of the coating layer to exhibit minimal surface damage under operation, which is directly associated with various micro-mechanical characteristics (such as hardness, elastic modulus and related characteristics; nano-impact; and scratch test-based characteristics). The results showed that a substantial increase in tool life, as well as improvement of the mechanical properties, could be achieved through the architectural development of the coatings.

  7. Neural mechanisms underlying social conformity in an ultimatum game

    Directory of Open Access Journals (Sweden)

    Zhenyu eWei

    2013-12-01

    Full Text Available When individuals’ actions are incongruent with those of the group they belong to, they may change their initial behavior in order to conform to the group norm. This phenomenon is known as social conformity. In the present study, we used event-related functional magnetic resonance imaging (fMRI) to investigate brain activity in response to group opinion during an ultimatum game. Results showed that participants changed their choices when these choices conflicted with the normative opinion of the group they were members of, especially in conditions of unfair treatment. The fMRI data revealed that a conflict with group norms activated the brain regions involved in norm violations and behavioral adjustment. Furthermore, in the reject-unfair condition, we observed that a conflict with group norms activated the medial frontal gyrus. These findings contribute to recent research examining neural mechanisms involved in detecting violations of social norms, and provide information regarding the neural representation of conformity behavior in an economic game.

  8. Primate brain architecture and selection in relation to sex.

    Science.gov (United States)

    Lindenfors, Patrik; Nunn, Charles L; Barton, Robert A

    2007-05-10

    Social and competitive demands often differ between the sexes in mammals. These differing demands should be expected to produce variation in the relative sizes of various brain structures. Sexual selection on males can be predicted to influence brain components handling sensory-motor skills that are important for physical competition or neural pathways involving aggression. Conversely, because female fitness is more closely linked to ecological factors and social interactions that enable better acquisition of resources, social selection on females should select for brain components important for navigating social networks. Sexual and social selection acting on one sex could produce sexual dimorphism in brain structures, which would result in larger species averages for those same brain structures. Alternatively, sex-specific selection pressures could produce correlated effects in the other sex, resulting in larger brain structures for both males and females of a species. Data are presently unavailable for the sex-specific sizes of brain structures for anthropoid primates, but under either scenario, the effects of sexual and social selection should leave a detectable signal in average sizes of brain structures for different species. The degree of male intra-sexual selection was positively correlated with several structures involved in autonomic functions and sensory-motor skills, and in pathways relating to aggression and aggression control. The degree of male intra-sexual selection was not correlated with relative neocortex size, which instead was significantly positively correlated with female social group size, but negatively correlated with male group size. Sexual selection on males and social selection on females have exerted different effects on primate brain architecture. Species with a higher degree of male intra-sexual selection carry a neural signature of an evolutionary history centered on physical conflicts, but no traces of increased demands on

  9. Primate brain architecture and selection in relation to sex

    Directory of Open Access Journals (Sweden)

    Nunn Charles L

    2007-05-01

    Full Text Available Abstract Background Social and competitive demands often differ between the sexes in mammals. These differing demands should be expected to produce variation in the relative sizes of various brain structures. Sexual selection on males can be predicted to influence brain components handling sensory-motor skills that are important for physical competition or neural pathways involving aggression. Conversely, because female fitness is more closely linked to ecological factors and social interactions that enable better acquisition of resources, social selection on females should select for brain components important for navigating social networks. Sexual and social selection acting on one sex could produce sexual dimorphism in brain structures, which would result in larger species averages for those same brain structures. Alternatively, sex-specific selection pressures could produce correlated effects in the other sex, resulting in larger brain structures for both males and females of a species. Data are presently unavailable for the sex-specific sizes of brain structures for anthropoid primates, but under either scenario, the effects of sexual and social selection should leave a detectable signal in average sizes of brain structures for different species. Results The degree of male intra-sexual selection was positively correlated with several structures involved in autonomic functions and sensory-motor skills, and in pathways relating to aggression and aggression control. The degree of male intra-sexual selection was not correlated with relative neocortex size, which instead was significantly positively correlated with female social group size, but negatively correlated with male group size. Conclusion Sexual selection on males and social selection on females have exerted different effects on primate brain architecture. Species with a higher degree of male intra-sexual selection carry a neural signature of an evolutionary history centered on

  10. Tectonic thinking in contemporary industrialized architecture

    DEFF Research Database (Denmark)

    Beim, Anne

    2013-01-01

    This paper argues for a new critical approach to the ways architectural design strategies are developing. The contemporary construction industry appears to evolve into highly specialized and optimized processes driven by industrialized manufacturing; therefore the role of the architect and the understanding of the architectural design process ought to be revised. The paper is based on the following underlying hypothesis: ‘Tectonic thinking – defined as a central attention towards the nature, the properties, and the application of building materials (construction) and how this attention forms a creative force in building constructions, structural features and architectural design (construing) – helps to identify and refine technology transfer in contemporary industrialized building construction.’ Through various references from the construction industry, business theory and architectural practice...

  11. Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.

    Science.gov (United States)

    Aftab, Muhammad Saleheen; Shafiq, Muhammad

    2015-11-01

    This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The convergence of the controller and estimator errors and the closed-loop system stability are analysed using Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Cooperating attackers in neural cryptography.

    Science.gov (United States)

    Shacham, Lanir N; Klein, Einat; Mislovaty, Rachel; Kanter, Ido; Kinzel, Wolfgang

    2004-06-01

    A successful attack strategy in neural cryptography is presented. The neural cryptosystem, based on synchronization of neural networks by mutual learning, has recently been shown to be secure under different attack strategies. The success of the advanced attacker presented here, called the "majority-flipping attacker," does not decay with the parameters of the model. This attacker's outstanding success is due to its use of a group of attackers that cooperate throughout the synchronization process, unlike any other known attack strategy. An analytical description of this attack is also presented, and it fits the results of simulations.
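
    The record does not spell out the network model, but the neural cryptosystems studied in this literature are commonly tree parity machines synchronized by mutual learning. Below is a hedged sketch of that basic two-party protocol (the values of K, N and L are illustrative, and the majority-flipping attack itself, which coordinates a group of attacker machines and flips minority hidden-unit outputs, is not reproduced here).

```python
import numpy as np

K, N, L = 3, 10, 3                      # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(1)

def output(w, x):
    """Hidden-unit signs (sigma) and the parity output tau of a tree parity machine."""
    sigma = np.sign((w * x).sum(axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Move only the hidden units whose sign agrees with the machine's own output."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

w_a = rng.integers(-L, L + 1, size=(K, N))
w_b = rng.integers(-L, L + 1, size=(K, N))

for step in range(1, 100001):
    x = rng.choice([-1, 1], size=(K, N))          # public random input
    sigma_a, tau_a = output(w_a, x)
    sigma_b, tau_b = output(w_b, x)
    if tau_a == tau_b:                            # only the outputs are exchanged
        hebbian_update(w_a, x, sigma_a, tau_a)
        hebbian_update(w_b, x, sigma_b, tau_b)
    if np.array_equal(w_a, w_b):
        print("weights synchronized after", step, "exchanged inputs")
        break
else:
    print("did not synchronize within the step budget")
```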

  13. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets, with the aim of maintaining diversity among the networks. It then uses the selected lag and combines the best trained networks to construct the ensemble, reflecting LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 forecasting competitions, as well as on several standard benchmark time series. In terms of forecasting accuracy, our experimental results show clearly that LEA is better than other ensemble and non-ensemble methods.
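
    A hedged, much-simplified sketch of the two-layer idea on a synthetic series (scikit-learn MLPs, with illustrative hidden sizes and lag range; the paper's ensemble construction is more elaborate): a first set of small networks scores candidate lags on a validation split, and a second ensemble trained on bootstrap subsets with the selected lag is averaged for the forecast.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
series = np.sin(np.arange(400) * 0.2) + 0.1 * rng.standard_normal(400)

def make_xy(s, lag):
    X = np.array([s[i:i + lag] for i in range(len(s) - lag)])
    return X, s[lag:]

def fit_mlp(X, y, seed):
    return MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                        random_state=seed).fit(X, y)

train = series[:300]

# Layer 1: pick the lag whose network generalizes best to the validation part.
def valid_error(lag):
    Xtr, ytr = make_xy(train, lag)
    model = fit_mlp(Xtr, ytr, seed=lag)
    Xva, yva = make_xy(series[300 - lag:], lag)   # windows whose targets lie in the validation part
    return np.mean((model.predict(Xva) - yva) ** 2)

best_lag = min(range(2, 13), key=valid_error)

# Layer 2: an ensemble trained on different bootstrap subsets, predictions averaged.
Xtr, ytr = make_xy(train, best_lag)
ensemble = []
for i in range(5):
    idx = rng.choice(len(ytr), size=len(ytr), replace=True)
    ensemble.append(fit_mlp(Xtr[idx], ytr[idx], seed=100 + i))

last_window = series[-best_lag:].reshape(1, -1)
forecast = np.mean([m.predict(last_window)[0] for m in ensemble])
print("selected lag:", best_lag, " one-step forecast:", round(forecast, 3))
```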

  14. Incidents Prediction in Road Junctions Using Artificial Neural Networks

    Science.gov (United States)

    Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed

    2018-05-01

    The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, the IDS can in no case replace the classical monitoring system controlled by the human eye. The aim of this work is to increase the probability of detecting and predicting incidents in camera-monitored areas, given that these areas are monitored by multiple cameras but few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a modelling of the trajectories and their characteristics, then we develop a learning database of valid and invalid trajectories, and finally we carry out a comparative study to find the artificial neural network architecture that maximizes the recognition rate of valid and invalid trajectories.

  15. Language Learning Enhanced by Massive Multiple Online Role-Playing Games (MMORPGs) and the Underlying Behavioral and Neural Mechanisms

    Science.gov (United States)

    Zhang, Yongjun; Song, Hongwen; Liu, Xiaoming; Tang, Dinghong; Chen, Yue-e; Zhang, Xiaochu

    2017-01-01

    Massive Multiple Online Role-Playing Games (MMORPGs) have increased in popularity among children, juveniles, and adults since MMORPGs’ appearance in this digital age. MMORPGs can be applied to enhancing language learning, which is drawing researchers’ attention from different fields and many studies have validated MMORPGs’ positive effect on language learning. However, there are few studies on the underlying behavioral or neural mechanism of such effect. This paper reviews the educational application of the MMORPGs based on relevant macroscopic and microscopic studies, showing that gamers’ overall language proficiency or some specific language skills can be enhanced by real-time online interaction with peers and game narratives or instructions embedded in the MMORPGs. Mechanisms underlying the educational assistant role of MMORPGs in second language learning are discussed from both behavioral and neural perspectives. We suggest that attentional bias makes gamers/learners allocate more cognitive resources toward task-related stimuli in a controlled or an automatic way. Moreover, with a moderating role played by activation of reward circuit, playing the MMORPGs may strengthen or increase functional connectivity from seed regions such as left anterior insular/frontal operculum (AI/FO) and visual word form area to other language-related brain areas. PMID:28303097

  16. Language Learning Enhanced by Massive Multiple Online Role-Playing Games (MMORPGs) and the Underlying Behavioral and Neural Mechanisms.

    Science.gov (United States)

    Zhang, Yongjun; Song, Hongwen; Liu, Xiaoming; Tang, Dinghong; Chen, Yue-E; Zhang, Xiaochu

    2017-01-01

    Massive Multiple Online Role-Playing Games (MMORPGs) have increased in popularity among children, juveniles, and adults since MMORPGs' appearance in this digital age. MMORPGs can be applied to enhancing language learning, which is drawing researchers' attention from different fields and many studies have validated MMORPGs' positive effect on language learning. However, there are few studies on the underlying behavioral or neural mechanism of such effect. This paper reviews the educational application of the MMORPGs based on relevant macroscopic and microscopic studies, showing that gamers' overall language proficiency or some specific language skills can be enhanced by real-time online interaction with peers and game narratives or instructions embedded in the MMORPGs. Mechanisms underlying the educational assistant role of MMORPGs in second language learning are discussed from both behavioral and neural perspectives. We suggest that attentional bias makes gamers/learners allocate more cognitive resources toward task-related stimuli in a controlled or an automatic way. Moreover, with a moderating role played by activation of reward circuit, playing the MMORPGs may strengthen or increase functional connectivity from seed regions such as left anterior insular/frontal operculum (AI/FO) and visual word form area to other language-related brain areas.

  17. Architecture of Brazil 1900-1990

    CERN Document Server

    Segawa, Hugo

    2013-01-01

    Architecture of Brazil: 1900-1990 examines the processes that underpin modern Brazilian architecture under various influences and characterizes different understandings of modernity, evident in the chapter topics of this book. Accordingly, the author does not give overall preference to particular architects nor works, with the exception of a few specific works and architects, including Warchavchik, Niemeyer, Lucio Costa, and Vilanova Artigas. In summary, this book: Meticulously examines the controversies, achievements, and failures in constructing spaces, buildings, and cities in a dynamic country Gives a broad view of Brazilian architecture in the twentieth century Proposes a reinterpretation of the varied approaches of the modern movement up to the Second World War Analyzes ideological impacts of important Brazilian architects including Oscar Niemeyer, Lucio Costa and Vilanova Artigas Discusses work of expatriate architects in Brazil Features over 140 illustrations In Architecture of Brazil: 1900-1990, S...

  18. Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency.

    Science.gov (United States)

    Tsirlin, Inna; Allison, Robert S; Wilcox, Laurie M

    2012-02-01

    We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure ground segregation. Copyright © 2011 Elsevier Ltd. All rights reserved.
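
    A toy rate model reflecting only the connectivity the abstract names (excitation within a disparity plane, inhibition across planes), with the front/back dot imbalance represented as unequal input drive. The weights, time constants and drives are illustrative assumptions, and the disparity-interpolation and figure-ground components of the published model are omitted.

```python
import numpy as np

def steady_state(drive_front, drive_back, w_exc=0.3, w_inh=0.5,
                 steps=200, dt=0.1):
    """Two disparity-plane populations: self-excitation, cross-inhibition."""
    r = np.zeros(2)                                  # [front, back] population rates
    drive = np.array([drive_front, drive_back])
    for _ in range(steps):
        inp = drive + w_exc * r - w_inh * r[::-1]    # within-plane excitation, across-plane inhibition
        r = r + dt * (-r + np.maximum(inp, 0.0))     # leaky rectified rate dynamics
    return r

sparse_front = steady_state(drive_front=0.3, drive_back=0.7)   # fewer dots on the front surface
sparse_back = steady_state(drive_front=0.7, drive_back=0.3)    # fewer dots on the back surface
print("front surface sparse -> [front, back] rates:", np.round(sparse_front, 3))
print("back surface sparse  -> [front, back] rates:", np.round(sparse_back, 3))
```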

  19. Modeling of methane emissions using artificial neural network approach

    Directory of Open Access Journals (Sweden)

    Stamenković Lidija J.

    2015-01-01

    Full Text Available The aim of this study was to develop a model for forecasting CH4 emissions at the national level, using Artificial Neural Networks (ANN) with broadly available sustainability, economic and industrial indicators as their inputs. ANN modeling was performed using two different types of architecture: a Backpropagation Neural Network (BPNN) and a General Regression Neural Network (GRNN). A conventional multiple linear regression (MLR) model was also developed in order to compare model performance and assess which model provides the best results. The ANN and MLR models were developed and tested using the same annual data for 20 European countries. The ANN model demonstrated very good performance, significantly better than the MLR model. It was shown that a forecast of CH4 emissions at the national level using the ANN model can be made successfully and accurately for a future period of up to two years, thereby opening the possibility to apply such a modeling technique to support the implementation of sustainable development strategies and environmental management policies. [Project of the Ministry of Science of the Republic of Serbia, no. 172007]
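
    Of the two architectures named, the GRNN is essentially Nadaraya-Watson kernel regression over the training exemplars, which makes it easy to sketch. The data below are synthetic stand-ins for the national indicator inputs, and the smoothing parameter sigma is an illustrative choice, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
X_train = rng.standard_normal((60, 4))              # e.g. 4 sustainability/economic indicators
y_train = X_train @ np.array([0.5, -1.0, 0.2, 0.8]) + 0.1 * rng.standard_normal(60)

def grnn_predict(X_train, y_train, X_query, sigma=0.8):
    """General Regression Neural Network: Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)   # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                                  # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)                                  # summation/output layer

X_new = rng.standard_normal((5, 4))
print(grnn_predict(X_train, y_train, X_new))
```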

  20. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results; however, further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  1. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-11

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.
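
    A structural sketch of the combination the abstract describes (untrained, random weights; this is not DeepCNF itself nor its learned parameters): convolutional per-position label scores over a one-hot protein sequence, plus a label-transition matrix coupling adjacent positions, decoded jointly here with plain Viterbi over three secondary-structure states.

```python
import numpy as np

rng = np.random.default_rng(4)
AA = "ACDEFGHIKLMNPQRSTVWY"
LABELS = ["H", "E", "C"]
WINDOW = 7

def one_hot(seq):
    x = np.zeros((len(seq), len(AA)))
    x[np.arange(len(seq)), [AA.index(a) for a in seq]] = 1.0
    return x

def conv_scores(x, w):
    """Per-position label scores from a window of one-hot residues (a single conv layer)."""
    half = WINDOW // 2
    xp = np.pad(x, ((half, half), (0, 0)))
    windows = np.array([xp[i:i + WINDOW].ravel() for i in range(len(x))])
    return windows @ w                                   # shape (length, 3)

def viterbi(emit, trans):
    """Best label path given emission scores and label-transition scores."""
    n, k = emit.shape
    score = emit[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        total = score[:, None] + trans + emit[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [LABELS[i] for i in reversed(path)]

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
w_conv = 0.1 * rng.standard_normal((WINDOW * len(AA), len(LABELS)))
w_trans = np.array([[1.0, -1.0, 0.0],                    # illustrative: sticky H/E runs, C as default
                    [-1.0, 1.0, 0.0],
                    [0.0, 0.0, 0.5]])
print("".join(viterbi(conv_scores(one_hot(seq), w_conv), w_trans)))
```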

  2. A neural network model of ventriloquism effect and aftereffect.

    Directory of Open Access Journals (Sweden)

    Elisa Magosso

    Full Text Available Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: (i) the less localized stimulus is strongly biased toward the more localized stimulus and not vice versa; (ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; (iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
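
    A minimal sketch of the two-layer idea (illustrative tuning widths and coupling strength; the lateral synapses and Hebbian aftereffect learning of the published model are omitted): a coarse auditory map and a sharp visual map over the same spatial axis, with visual activity feeding back onto the auditory layer, so that the decoded sound position is pulled toward the flash.

```python
import numpy as np

positions = np.arange(0, 180)            # spatial axis in degrees

def gaussian_input(center, sigma):
    return np.exp(-0.5 * ((positions - center) / sigma) ** 2)

def decode(activity):
    """Centre-of-mass readout of a population activity profile."""
    return float((positions * activity).sum() / activity.sum())

def perceived_sound(sound_pos, visual_pos, coupling=0.8,
                    sigma_aud=15.0, sigma_vis=3.0):
    aud = gaussian_input(sound_pos, sigma_aud)       # broad auditory tuning
    vis = gaussian_input(visual_pos, sigma_vis)      # sharp visual tuning
    aud_total = aud + coupling * vis                 # visual-to-auditory excitatory feedback
    return decode(aud_total)

print("sound at 90 deg, no flash      :", round(decode(gaussian_input(90, 15.0)), 1))
print("sound at 90 deg, flash at 105  :", round(perceived_sound(90, 105), 1))
```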

  3. Neural networks and their potential application to nuclear power plants

    International Nuclear Information System (INIS)

    Uhrig, R.E.

    1991-01-01

    A network of artificial neurons, usually called an artificial neural network, is a data processing system consisting of a number of highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks exhibit characteristics and capabilities not provided by any other technology. Neural networks may be designed so as to classify an input pattern as one of several predefined types or to create, as needed, categories or classes of system states which can be interpreted by a human operator. Neural networks have the ability to recognize patterns, even when the information comprising these patterns is noisy, sparse, or incomplete. Thus, systems of artificial neural networks show great promise for use in environments in which robust, fault-tolerant pattern recognition is necessary in a real-time mode, and in which the incoming data may be distorted or noisy. The application of neural networks, a rapidly evolving technology used extensively in defense applications, alone or in conjunction with other advanced technologies, to some of the problems of operating nuclear power plants has the potential to enhance the safety, reliability and operability of nuclear power plants. The potential applications of neural networking include, but are not limited to, diagnosing specific abnormal conditions, identification of nonlinear dynamics and transients, detection of the change of mode of operation, control of temperature and pressure during start-up, signal validation, plant-wide monitoring using autoassociative neural networks, monitoring of check valves, modeling of the plant thermodynamics, emulation of core reload calculations, analysis of temporal sequences in the NRC's 'licensee event reports,' and monitoring of plant parameters.

  4. Natural language acquisition in large scale neural semantic networks

    Science.gov (United States)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well as, or better than, more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  5. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
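
    One common way to realize the reduce-to-two-dimensions step described here is principal component analysis; the sketch below (synthetic data, with PCA as an assumed stand-in for whatever reduction the project actually used) projects a six-variable dataset onto its top two components so it can be scatter-plotted and inspected.

```python
import numpy as np

rng = np.random.default_rng(5)
latent = rng.standard_normal((300, 2))                        # 2 underlying factors
mixing = rng.standard_normal((2, 6))
X = latent @ mixing + 0.05 * rng.standard_normal((300, 6))    # 6 observed input variables

Xc = X - X.mean(axis=0)
_, s, vt = np.linalg.svd(Xc, full_matrices=False)             # principal directions
coords_2d = Xc @ vt[:2].T                                     # 2-D coordinates for plotting
explained = (s ** 2) / (s ** 2).sum()

print("2-D coordinates shape:", coords_2d.shape)
print("variance captured by the top 2 components:", round(explained[:2].sum(), 3))
print("loadings of component 1 on the 6 inputs:", np.round(vt[0], 2))
```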

  6. The neural basis of loss aversion in decision-making under risk.

    Science.gov (United States)

    Tom, Sabrina M; Fox, Craig R; Trepel, Christopher; Poldrack, Russell A

    2007-01-26

    People typically exhibit greater sensitivity to losses than to equivalent gains when making decisions. We investigated neural correlates of loss aversion while individuals decided whether to accept or reject gambles that offered a 50/50 chance of gaining or losing money. A broad set of areas (including midbrain dopaminergic regions and their targets) showed increasing activity as potential gains increased. Potential losses were represented by decreasing activity in several of these same gain-sensitive areas. Finally, individual differences in behavioral loss aversion were predicted by a measure of neural loss aversion in several regions, including the ventral striatum and prefrontal cortex.

  7. A maize introgression library reveals ample genetic variability for root architecture, water use efficiency and grain yield under different water regimes

    OpenAIRE

    Salvi, S.; Giuliani, S.; Cané, M.; Sciara, G.; Bovina, R.; Welcker, Claude; Cabrera Bosquet, Llorenç; Grau, Antonin; Tardieu, Francois; Meriggi, P.

    2015-01-01

    The genetic dissection of root system architecture (RSA) provides valuable opportunities towards a better understanding of its role in determining yield under different water regimes. To this end, a maize introgression library comprising 75 BC5 lines derived from the cross between Gaspé Flint (an early line; donor parent) and B73 (an elite line; recurrent parent) was evaluated in two experiments conducted under well-watered and water-deficit conditions (WW and WD, respectively) in order to...

  8. Development of efficiency module of organization of Arctic sea cargo transportation with application of neural network technologies

    Science.gov (United States)

    Sobolevskaya, E. Yu; Glushkov, S. V.; Levchenko, N. G.; Orlov, A. P.

    2018-05-01

    An analysis of software intended for organizing and managing sea cargo transportation processes has been carried out. Shortcomings of existing information resources for organizing work in the Arctic and Subarctic regions of the Far East are identified: the lack of decision-support systems and the lack of factor analysis for calculating delivery time and cost. The architecture of a module for calculating the effectiveness of the organization of sea cargo transportation has been developed. The simulation process, which is based on a neural network, is considered. The main classification factors and their weighting coefficients have been identified. The architecture of the neural network has been developed to calculate the efficiency of the organization of sea cargo transportation in Arctic conditions. The architecture of an intellectual system for organizing sea cargo transportation has been developed, taking into account the difficult navigation conditions in the Arctic. Its implementation will provide the management of the shipping company with predictive analytics; support decision-making; calculate the most efficient delivery route; provide on-demand online transportation forecasts; and minimize shipping costs, transit delays, and risks to cargo safety.

  9. The role of stochasticity in an information-optimal neural population code

    Energy Technology Data Exchange (ETDEWEB)

    Stocks, N G; Nikitin, A P [School of Engineering, University of Warwick, Coventry CV4 7AL (United Kingdom); McDonnell, M D [Institute for Telecommunications Research, University of South Australia, SA 5095 (Australia); Morse, R P, E-mail: n.g.stocks@warwick.ac.u [School of Life and Health Sciences, Aston University, Birmingham B4 7ET (United Kingdom)

    2009-12-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each displays Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.
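
    A Monte-Carlo sketch of the first model system described, a pooling array of McCulloch-Pitts threshold units with independent additive noise (thresholds, signal levels and array size below are illustrative, not the paper's values): Shannon mutual information between a two-valued input and the pooled spike count is estimated empirically and scanned over the noise standard deviation.

```python
import numpy as np

rng = np.random.default_rng(6)
N_UNITS, TRIALS = 7, 20000
THRESHOLDS = np.linspace(-0.5, 0.5, N_UNITS)     # heterogeneous unit thresholds

def mutual_information(noise_std):
    x = rng.choice([-1.0, 1.0], size=TRIALS)                  # equiprobable binary input
    noise = noise_std * rng.standard_normal((TRIALS, N_UNITS))
    counts = (x[:, None] + noise > THRESHOLDS).sum(axis=1)    # pooled spike count

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # I(X; K) = H(K) - H(K | X), from empirical distributions
    p_k = np.bincount(counts, minlength=N_UNITS + 1) / TRIALS
    h_k = entropy(p_k)
    h_k_given_x = 0.0
    for val in (-1.0, 1.0):
        sel = counts[x == val]
        p = np.bincount(sel, minlength=N_UNITS + 1) / len(sel)
        h_k_given_x += 0.5 * entropy(p)
    return h_k - h_k_given_x

for sigma in (0.1, 0.5, 1.0, 2.0):
    print(f"noise std {sigma}: I(X;K) ~ {mutual_information(sigma):.3f} bits")
```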

  10. QCD-Aware Neural Networks for Jet Physics

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent progress in applying machine learning for jet physics has been built upon an analogy between calorimeters and images. In this work, we present a novel class of recursive neural networks built instead upon an analogy between QCD and natural languages. In the analogy, four-momenta are like words and the clustering history of sequential recombination jet algorithms is like the parsing of a sentence. Our approach works directly with the four-momenta of a variable-length set of particles, and the jet-based neural network topology varies on an event-by-event basis. Our experiments highlight the flexibility of our method for building task-specific jet embeddings and show that recursive architectures are significantly more accurate and data efficient than previous image-based networks. We extend the analogy from individual jets (sentences) to full events (paragraphs), and show for the first time an event-level classifier operating...
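
    A minimal sketch of the recursive-embedding idea (untrained random weights and a hand-built clustering tree; the actual model learns its weights and follows the tree produced by a sequential recombination jet algorithm): each leaf embeds a particle four-momentum, each internal node combines its children's embeddings, and the root vector summarizes the jet for a downstream classifier.

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 8
W_LEAF = 0.5 * rng.standard_normal((DIM, 4))         # embeds a (E, px, py, pz) four-momentum
W_NODE = 0.5 * rng.standard_normal((DIM, 2 * DIM))   # combines two child embeddings

def embed(node):
    """node is either a 4-tuple four-momentum (leaf) or a pair of child nodes."""
    if len(node) == 2:                                # internal node: (left, right)
        left, right = embed(node[0]), embed(node[1])
        return np.tanh(W_NODE @ np.concatenate([left, right]))
    return np.tanh(W_LEAF @ np.asarray(node, dtype=float))

# A toy jet: four particles clustered pairwise, mirroring a recombination history.
jet = (((50.0, 20.0, 10.0, 44.0), (30.0, 12.0, 8.0, 26.0)),
       ((20.0, 5.0, 3.0, 19.0), (10.0, 2.0, 1.0, 9.5)))
jet_embedding = embed(jet)
print("jet embedding:", np.round(jet_embedding, 3))
```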

  11. Using Hybrid Algorithm to Improve Intrusion Detection in Multi Layer Feed Forward Neural Networks

    Science.gov (United States)

    Ray, Loye Lynn

    2014-01-01

    The need to detect malicious behavior on computer networks continues to be important to maintaining a safe and secure environment. The purpose of this study was to determine the relationship of multilayer feed forward neural network architecture to the ability to detect abnormal behavior in networks. This involved building, training, and…

  12. A computer architecture for intelligent machines

    Science.gov (United States)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  13. In Situ Representations and Access Consciousness in Neural Blackboard or Workspace Architectures

    OpenAIRE

    Frank van der Velde

    2018-01-01

    Phenomenal theories of consciousness assert that consciousness is based on specific neural correlates in the brain, which can be separated from all cognitive functions we can perform. If so, the search for robot consciousness seems to be doomed. By contrast, theories of functional or access consciousness assert that consciousness can be studied only with forms of cognitive access, given by cognitive processes. Consequently, consciousness and cognitive access cannot be fully dissociated. Here,...

  14. Non-invasive neural stimulation

    Science.gov (United States)

    Tyler, William J.; Sanguinetti, Joseph L.; Fini, Maria; Hool, Nicholas

    2017-05-01

    Neurotechnologies for non-invasively interfacing with neural circuits have been evolving from those capable of sensing neural activity to those capable of restoring and enhancing human brain function. Generally referred to as non-invasive neural stimulation (NINS) methods, these neuromodulation approaches rely on electrical, magnetic, photonic, and acoustic or ultrasonic energy to influence nervous system activity, brain function, and behavior. Evidence that has been mounting for decades shows that advanced neural engineering of NINS technologies will indeed transform the way humans treat diseases, interact with information, communicate, and learn. The physics underlying the ability of various NINS methods to modulate nervous system activity can be quite different from one another depending on the energy modality used, as we briefly discuss. For members of commercial and defense industry sectors that have not traditionally engaged in neuroscience research and development, the science, engineering and technology required to advance NINS methods beyond the state-of-the-art presents tremendous opportunities. Within the past few years alone there have been large increases in global investments made by federal agencies, foundations, private investors and multinational corporations to develop advanced applications of NINS technologies. Driven by these efforts, NINS methods and devices have recently been introduced to mass markets via the consumer electronics industry. Further, NINS continues to be explored in a growing number of defense applications focused on enhancing human dimensions. The present paper provides a brief introduction to the field of non-invasive neural stimulation by highlighting some of the more common methods in use or under current development today.

  15. Modulating conscious movement intention by noninvasive brain stimulation and the underlying neural mechanisms.

    Science.gov (United States)

    Douglas, Zachary H; Maniscalco, Brian; Hallett, Mark; Wassermann, Eric M; He, Biyu J

    2015-05-06

    Conscious intention is a fundamental aspect of the human experience. Despite long-standing interest in the basis and implications of intention, its underlying neurobiological mechanisms remain poorly understood. Using high-definition transcranial DC stimulation (tDCS), we observed that enhancing spontaneous neuronal excitability in both the angular gyrus and the primary motor cortex caused the reported time of conscious movement intention to be ∼60-70 ms earlier. Slow brain waves recorded ∼2-3 s before movement onset, as well as hundreds of milliseconds after movement onset, independently correlated with the modulation of conscious intention by brain stimulation. These brain activities together accounted for 81% of interindividual variability in the modulation of movement intention by brain stimulation. A computational model using coupled leaky integrator units with biophysically plausible assumptions about the effect of tDCS captured the effects of stimulation on both neural activity and behavior. These results reveal a temporally extended brain process underlying conscious movement intention that spans seconds around movement commencement. Copyright © 2015 Douglas et al.
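
    A toy version of the model class the abstract names (coupled leaky integrator units), with stimulation reduced to a small constant excitability boost; all parameters are illustrative assumptions, not the published fit. Because the boost adds drive along the whole trajectory, the accumulating signal crosses threshold earlier, mirroring the earlier reported intention time under stimulation.

```python
import numpy as np

def crossing_time(boost=0.0, dt=1.0, tau=1000.0, drive=0.0012,
                  coupling=0.0005, threshold=1.0, t_max=6000, seed=0):
    """Time (in steps of dt) at which either coupled unit first crosses threshold."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                                   # two coupled leaky integrators
    for t in range(t_max):
        noise = 0.01 * rng.standard_normal(2)
        dx = (-x / tau + drive + boost + coupling * x[::-1] + noise) * dt
        x = x + dx
        if x.max() >= threshold:
            return t * dt
    return float("inf")

times_sham = [crossing_time(boost=0.0, seed=s) for s in range(50)]
times_stim = [crossing_time(boost=0.0001, seed=s) for s in range(50)]
print("mean crossing time, no boost   :", round(np.mean(times_sham), 1))
print("mean crossing time, with boost :", round(np.mean(times_stim), 1))
```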

  16. Hybrid Neural Network Approach Based Tool for the Modelling of Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2015-01-01

    Full Text Available A tool based on a hybrid neural network approach for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that a direct computation of the five parameters via a multiple-input, multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae that support the neural network, which, in our case, is aimed at predicting just two of the five parameters identifying the model: the other three parameters are computed by the reduced form. The present hybrid approach is efficient from the computational-cost point of view and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy tool suitable for implementation in a microcontroller-based architecture. Validations are made on about 10000 PV panels belonging to the California Energy Commission database.

  17. Performance anomaly detection in microservice architectures under continuous change

    OpenAIRE

    Düllmann, Thomas F.

    2017-01-01

    The idea of DevOps and agile approaches like Continuous Integration (CI) and microservice architectures are becoming more and more popular as the demand for flexible and scalable solutions is increasing. By raising the degree of automation and distribution, new challenges in terms of application performance monitoring arise, because microservices are possibly short-lived and may be replaced within seconds. The fact that microservices are added and removed on a regular basis brings new requirements...

  18. Application of Artificial Neural Network to Predict the use of Runway at Juanda International Airport

    Science.gov (United States)

    Putra, J. C. P.; Safrilah

    2017-06-01

    Artificial neural network approaches are useful for solving many complicated problems in various areas such as engineering, medicine, business, and manufacturing. This paper presents an application of an artificial neural network to predict runway capacity at Juanda International Airport. A backpropagation multi-layer perceptron artificial neural network model is adopted in this research to learn the pattern of runway use at Juanda International Airport. The results indicate that the network successfully recognizes the pattern of runway use in the training data, whereas the testing data indicate otherwise. Finally, it can be concluded that the uniformity of the data and the network architecture are the critical factors determining the accuracy of the prediction results.

  19. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping, since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.

  20. Neural Mechanisms of Updating under Reducible and Irreducible Uncertainty.

    Science.gov (United States)

    Kobayashi, Kenji; Hsu, Ming

    2017-07-19

    Adaptive decision making depends on an agent's ability to use environmental signals to reduce uncertainty. However, because of multiple types of uncertainty, agents must take into account not only the extent to which signals violate prior expectations but also whether uncertainty can be reduced in the first place. Here we studied how human brains of both sexes respond to signals under conditions of reducible and irreducible uncertainty. We show behaviorally that subjects' value updating was sensitive to the reducibility of uncertainty, and could be quantitatively characterized by a Bayesian model where agents ignore expectancy violations that do not update beliefs or values. Using fMRI, we found that neural processes underlying belief and value updating were separable from responses to expectancy violation, and that reducibility of uncertainty in value modulated connections from belief-updating regions to value-updating regions. Together, these results provide insights into how agents use knowledge about uncertainty to make better decisions while ignoring mere expectancy violation. SIGNIFICANCE STATEMENT To make good decisions, a person must observe the environment carefully, and use these observations to reduce uncertainty about consequences of actions. Importantly, uncertainty should not be reduced purely based on how surprising the observations are, particularly because in some cases uncertainty is not reducible. Here we show that the human brain indeed reduces uncertainty adaptively by taking into account the nature of uncertainty and ignoring mere surprise. Behaviorally, we show that human subjects reduce uncertainty in a quasioptimal Bayesian manner. Using fMRI, we characterize brain regions that may be involved in uncertainty reduction, as well as the network they constitute, and dissociate them from brain regions that respond to mere surprise. Copyright © 2017 the authors 0270-6474/17/376972-11$15.00/0.
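
    A toy numerical illustration of the behavioural model's key feature (illustrative probabilities and payoffs, not the paper's task): a Bayesian learner updates the probability that an option is good only when the signal is diagnostic of the hidden state, whereas an equally surprising but non-diagnostic signal, i.e., irreducible uncertainty, leaves belief and value unchanged.

```python
def posterior_good(prior_good, p_signal_given_good, p_signal_given_bad):
    """Bayes' rule for P(good | signal)."""
    num = p_signal_given_good * prior_good
    den = num + p_signal_given_bad * (1.0 - prior_good)
    return num / den

prior = 0.5
reward_good, reward_bad = 10.0, 0.0

# Reducible uncertainty: the signal occurs more often when the option is good.
post_reducible = posterior_good(prior, p_signal_given_good=0.8, p_signal_given_bad=0.2)

# Irreducible uncertainty: the signal is rare (surprising) but equally likely in both
# states, so it carries no information about the option and beliefs stay at the prior.
post_irreducible = posterior_good(prior, p_signal_given_good=0.1, p_signal_given_bad=0.1)

for name, post in [("reducible", post_reducible), ("irreducible", post_irreducible)]:
    value = post * reward_good + (1.0 - post) * reward_bad
    print(f"{name:>11}: P(good|signal) = {post:.2f}, expected value = {value:.1f}")
```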