WorldWideScience

Sample records for neural network activation

  1. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: explores questions related to the biological underpinning for models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics who have an interest in app...

  2. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of the neurophobic, polyimide-coated substrate by a triblock-copolymer coating enhances the neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  3. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
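
    A minimal rate-based sketch of the kind of ring network with spike-frequency adaptation discussed above may help fix ideas. This is a generic illustration, not the author's analytical model; the cosine connectivity kernel, the rectified-linear rate dynamics, and all parameter values are assumptions.

    ```python
    # Ring network of rate units with spike-frequency adaptation (SFA), illustrative only.
    import numpy as np

    N = 100                                   # number of neurons on the ring
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
    W = (0.9 * np.cos(theta[:, None] - theta[None, :]) - 0.2) / N  # assumed ring coupling

    tau_r, tau_a = 10.0, 200.0                # time constants (ms): rates and adaptation
    g_a = 0.5                                 # adaptation strength (set to 0.0 to remove SFA)
    dt, T = 0.5, 2000.0                       # integration step and duration (ms)

    r = np.zeros(N)                           # firing rates
    a = np.zeros(N)                           # adaptation variables
    stim = 1.5 * np.exp(np.cos(theta - np.pi) - 1.0)  # bump-shaped external input

    for step in range(int(T / dt)):
        inp = W @ r + stim - g_a * a          # recurrent input + stimulus - adaptation
        r += dt / tau_r * (-r + np.maximum(inp, 0.0))  # rectified-linear rate dynamics
        a += dt / tau_a * (-a + r)            # SFA tracks the rate and inhibits it

    print("peak of the activity bump at angle:", theta[np.argmax(r)])
    ```

    Comparing runs with g_a = 0.0 and g_a > 0.0 gives a quick feel for how adaptation amplifies the role of the external input relative to the recurrent dynamics.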

  4. Active Engine Mounting Control Algorithm Using Neural Network

    Directory of Open Access Journals (Sweden)

    Fadly Jashi Darsivan

    2009-01-01

    Full Text Available This paper proposes the application of a neural network as a controller to isolate engine vibration in an active engine mounting system. It has been shown that the NARMA-L2 neurocontroller has the ability to reject disturbances from a plant. The disturbances are assumed to be impulse and sinusoidal disturbances induced by the engine. The performance of the neural network controller is compared with conventional PD and PID controllers tuned using the Ziegler-Nichols method. From the simulation results, the neural network controller has shown a better ability to isolate the engine vibration than the conventional controllers.
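
    As a reference point for the conventional baseline mentioned above (not the neural controller itself), here is a hedged sketch of a discrete PID controller tuned with the classic closed-loop Ziegler-Nichols rules. The ultimate gain Ku and ultimate period Tu would be measured on the actual engine-mount plant; the values used below are placeholders, not taken from the paper.

    ```python
    # Classic Ziegler-Nichols PID tuning and a simple discrete PID controller (illustrative).
    def ziegler_nichols_pid(Ku, Tu):
        """Classic Z-N PID gains: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8."""
        Kp = 0.6 * Ku
        Ki = Kp / (Tu / 2.0)
        Kd = Kp * (Tu / 8.0)
        return Kp, Ki, Kd

    class PID:
        def __init__(self, Kp, Ki, Kd, dt):
            self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.Kp * error + self.Ki * self.integral + self.Kd * derivative

    # Example usage with hypothetical tuning values for Ku and Tu:
    Kp, Ki, Kd = ziegler_nichols_pid(Ku=2.0, Tu=0.05)
    controller = PID(Kp, Ki, Kd, dt=1e-3)
    force = controller.update(error=0.01)   # control force applied to the active mount
    ```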

  5. Patterns recognition of electric brain activity using artificial neural networks

    Science.gov (United States)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    An approach for the recognition of various cognitive processes in the brain activity in the perception of ambiguous images. On the basis of developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG by using an artificial neural network approach. After learning of the artificial neural network reliably identified cube recognition processes, for example, left-handed or right-oriented Necker cube with different intensity of their edges, construct an artificial neural network based on Perceptron architecture and demonstrate its effectiveness in the pattern recognition of the EEG in the experimental.
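
    For readers unfamiliar with the classifier side of such a pipeline, the following is a hedged sketch of a perceptron-style classifier applied to EEG feature vectors. The synthetic features, their dimensionality, and the labeling rule are assumptions for illustration only; this is not the authors' pipeline.

    ```python
    # Small multilayer perceptron classifying synthetic "EEG feature" vectors (illustrative).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 32))                    # 400 trials x 32 spectral features (synthetic)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)           # 0 = left-oriented cube, 1 = right-oriented (toy rule)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic", max_iter=500)
    clf.fit(X_train, y_train)
    print("held-out accuracy (synthetic data):", clf.score(X_test, y_test))
    ```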

  6. High Accuracy Human Activity Monitoring using Neural network

    OpenAIRE

    Sharma, Annapurna; Lee, Young-Dong; Chung, Wan-Young

    2011-01-01

    This paper presents the design of a neural network for the classification of human activity. A triaxial accelerometer sensor, housed in a chest-worn sensor unit, has been used for capturing the acceleration of the associated movements. All three axes of acceleration data were collected at a base station PC via a CC2420 2.4 GHz ISM band radio (ZigBee compliant), then processed and classified using MATLAB. A neural network approach for classification was used with an eye on theoretical a...

  7. High solar activity predictions through an artificial neural network

    Science.gov (United States)

    Orozco-Del-Castillo, M. G.; Ortiz-Alemán, J. C.; Couder-Castañeda, C.; Hernández-Gómez, J. J.; Solís-Santomé, A.

    The effects of high-energy particles coming from the Sun on human health as well as in the integrity of outer space electronics make the prediction of periods of high solar activity (HSA) a task of significant importance. Since periodicities in solar indexes have been identified, long-term predictions can be achieved. In this paper, we present a method based on an artificial neural network to find a pattern in some harmonics which represent such periodicities. We used data from 1973 to 2010 to train the neural network, and different historical data for its validation. We also used the neural network along with a statistical analysis of its performance with known data to predict periods of HSA with different confidence intervals according to the three-sigma rule associated with solar cycles 24-26, which we found to occur before 2040.

  8. Persistent activity in neural networks with dynamic synapses.

    Directory of Open Access Journals (Sweden)

    Omri Barak

    2007-02-01

    Full Text Available Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli.
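
    The synaptic depression and facilitation referred to above are commonly described by the Tsodyks-Markram short-term plasticity model. The sketch below uses that standard textbook formulation with illustrative parameter values; it is not the specific model analyzed in the paper.

    ```python
    # Tsodyks-Markram dynamic synapse: efficacy of each presynaptic spike (illustrative parameters).
    import numpy as np

    def tm_synapse(spike_times, tau_d=200.0, tau_f=600.0, U=0.2, A=1.0):
        """Return the efficacy A*u*x of each presynaptic spike (times in ms)."""
        u, x = U, 1.0
        last_t = 0.0
        efficacies = []
        for t in spike_times:
            dt = t - last_t
            # recovery between spikes: u relaxes back to U, resources x recover to 1
            u = U + (u - U) * np.exp(-dt / tau_f)
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)
            u = u + U * (1.0 - u)        # facilitation: transient increase in release probability
            efficacies.append(A * u * x) # amplitude of the postsynaptic response for this spike
            x = x * (1.0 - u)            # depression: depletion of available resources
            last_t = t
        return np.array(efficacies)

    print(tm_synapse([0, 20, 40, 60, 80, 1000]))  # a 50 Hz burst followed by a recovered spike
    ```

    Making tau_d large relative to tau_f gives depression-dominated synapses, and vice versa, which is the kind of combination the study varies.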

  9. Cultured neural networks: Optimisation of patterned network adhesiveness and characterisation of their neural activity

    NARCIS (Netherlands)

    Rutten, Wim; Ruardij, T.G.; Marani, Enrico; Roelofsen, B.H.

    2006-01-01

    One type of future, improved neural interface is the "cultured probe". It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and

  10. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems

    Directory of Open Access Journals (Sweden)

    Wilfredo Blanco

    2017-09-01

    Full Text Available Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in the ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these, and other, changes has on the network behavior is hard to know from experimental studies since they all happen in parallel. One advantage of a computational approach is that one has the ability to study developmental changes in isolation. Here, we examine the effects of GABAergic synapse polarity change on the spontaneous activity of both a mean field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the “intermediate neurons.” We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral changes result from priming of these neurons. The study illustrates how even arguably the simplest of developmental changes that occurs in neural systems can present non-intuitive behaviors. It also makes predictions about neural network behavioral changes

  11. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    Science.gov (United States)

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between

  12. Death and rebirth of neural activity in sparse inhibitory networks

    Science.gov (United States)

    Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro

    2017-05-01

    Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.

  13. Activity Patterns of Cultured Neural Networks on Micro Electrode Arrays

    National Research Council Canada - National Science Library

    Rutten, Wim

    2001-01-01

    A hybrid neuro-electronic interface is a cell-cultured micro electrode array, acting as a neural information transducer for stimulation and/or recording of neural activity in the brain or the spinal cord...

  14. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neuro-biology, introducing the concept of the multi-layer perceptron, learning and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach focussing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  15. Natural lecithin promotes neural network complexity and activity.

    Science.gov (United States)

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-05-27

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications.

  16. Neural Network Hydrological Modelling: Linear Output Activation Functions?

    Science.gov (United States)

    Abrahart, R. J.; Dawson, C. W.

    2005-12-01

    The power to represent non-linear hydrological processes is of paramount importance in neural network hydrological modelling operations. The accepted wisdom requires non-polynomial activation functions to be incorporated in the hidden units such that a single tier of hidden units can thereafter be used to provide a 'universal approximation' to whatever particular hydrological mechanism or function is of interest to the modeller. The user can select from a set of default activation functions, or in certain software packages, is able to define their own function - the most popular options being logistic, sigmoid and hyperbolic tangent. If a unit does not transform its inputs it is said to possess a 'linear activation function' and a combination of linear activation functions will produce a linear solution; whereas the use of non-linear activation functions will produce non-linear solutions in which the principle of superposition does not hold. For hidden units, speed of learning and network complexities are important issues. For the output units, it is desirable to select an activation function that is suited to the distribution of the target values: e.g. binary targets (logistic); categorical targets (softmax); continuous-valued targets with a bounded range (logistic / tanh); positive target values with no known upper bound (exponential; but beware of overflow); continuous-valued targets with no known bounds (linear). It is also standard practice in most hydrological applications to use the default software settings and to insert a set of identical non-linear activation functions in the hidden layer and output layer processing units. Mixed combinations have nevertheless been reported in several hydrological modelling papers and the full ramifications of such activities requires further investigation and assessment i.e. non-linear activation functions in the hidden units connected to linear or clipped-linear activation functions in the output unit. There are two
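
    The output-activation choices listed above can be written out directly. The functions below are the standard mathematical forms in plain NumPy; the helper names are ours and the mapping to target types simply restates the abstract, not any particular hydrological software package.

    ```python
    # Common output activation functions and the target types they suit (illustrative).
    import numpy as np

    def logistic(z):            # binary targets, or continuous targets bounded in (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def softmax(z):             # categorical targets (one output unit per class)
        e = np.exp(z - np.max(z, axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def tanh_act(z):            # continuous targets bounded in (-1, 1)
        return np.tanh(z)

    def exponential(z):         # positive targets with no known upper bound (clip to avoid overflow)
        return np.exp(np.clip(z, -50, 50))

    def linear(z):              # continuous targets with no known bounds
        return z

    # Example: the same pre-activation values passed through different output activations
    z = np.array([[0.3, -1.2, 2.0]])
    print(logistic(z), softmax(z), linear(z), sep="\n")
    ```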

  17. Voltage Estimation in Active Distribution Grids Using Neural Networks

    DEFF Research Database (Denmark)

    Pertl, Michael; Heussen, Kai; Gehrke, Oliver

    2016-01-01

    the observability of distribution systems has to be improved. To increase the situational awareness of the power system operator, data-driven methods can be employed. These methods benefit from newly available data sources such as smart meters. This paper presents a voltage estimation method based on neural networks

  18. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  19. Sensory-related neural activity regulates the structure of vascular networks in the cerebral cortex

    Science.gov (United States)

    Lacoste, Baptiste; Comin, Cesar H.; Ben-Zvi, Ayal; Kaeser, Pascal S.; Xu, Xiaoyin; Costa, Luciano da F.; Gu, Chenghua

    2014-01-01

    Neurovascular interactions are essential for proper brain function. While the effect of neural activity on cerebral blood flow has been extensively studied, whether neural activity influences vascular patterning remains elusive. Here, we demonstrate that neural activity promotes the formation of vascular networks in the early postnatal mouse barrel cortex. Using a combination of genetics, imaging, and computational tools to allow simultaneous analysis of neuronal and vascular components, we found that vascular density and branching were decreased in the barrel cortex when sensory input was reduced by either a complete deafferentation, a genetic impairment of neurotransmitter release at thalamocortical synapses, or a selective reduction of sensory-related neural activity by whisker plucking. In contrast, enhancement of neural activity by whisker stimulation led to an increase in vascular density and branching. The finding that neural activity is necessary and sufficient to trigger alterations of vascular networks reveals a novel feature of neurovascular interactions. PMID:25155955

  20. Sensory-related neural activity regulates the structure of vascular networks in the cerebral cortex.

    Science.gov (United States)

    Lacoste, Baptiste; Comin, Cesar H; Ben-Zvi, Ayal; Kaeser, Pascal S; Xu, Xiaoyin; Costa, Luciano da F; Gu, Chenghua

    2014-09-03

    Neurovascular interactions are essential for proper brain function. While the effect of neural activity on cerebral blood flow has been extensively studied, whether or not neural activity influences vascular patterning remains elusive. Here, we demonstrate that neural activity promotes the formation of vascular networks in the early postnatal mouse barrel cortex. Using a combination of genetics, imaging, and computational tools to allow simultaneous analysis of neuronal and vascular components, we found that vascular density and branching were decreased in the barrel cortex when sensory input was reduced by either a complete deafferentation, a genetic impairment of neurotransmitter release at thalamocortical synapses, or a selective reduction of sensory-related neural activity by whisker plucking. In contrast, enhancement of neural activity by whisker stimulation led to an increase in vascular density and branching. The finding that neural activity is necessary and sufficient to trigger alterations of vascular networks reveals an important feature of neurovascular interactions. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Introduction to neural networks

    CERN Document Server

    James, Frederick E

    1994-02-02

    1. Introduction and overview of Artificial Neural Networks. 2,3. The Feed-forward Network as an inverse Problem, and results on the computational complexity of network training. 4. Physics applications of neural networks.

  2. Active Vibration Control of the Smart Plate Using Artificial Neural Network Controller

    Directory of Open Access Journals (Sweden)

    Mohit

    2015-01-01

    Full Text Available The active vibration control (AVC) of a rectangular plate with a single-input single-output approach is investigated using an artificial neural network. The cantilever plate of finite length, breadth, and thickness, having piezoelectric patches as sensors/actuators fixed at the upper and lower surfaces of the metal plate, is considered for examination. The finite element model of the cantilever plate is utilized to formulate the whole strategy. CompactRIO and MATLAB simulation software are used to obtain the appropriate results. The cantilever plate is subjected to an impulse input and uniform white noise disturbance. The neural network is trained offline and tuned with an LQR controller. Various training algorithms to tune the neural network are exercised. The most efficient algorithm is finally selected to tune the neural network controller designed for active vibration control of the smart plate.

  3. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
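
    The substitution described above (addition for multiplication, maximum for summation) can be shown in a few lines. The sketch below places a max-plus layer next to the usual multiply-and-sum layer; shapes and values are illustrative, and the specific formulation with a bias term is our own simplification of the general idea.

    ```python
    # Classical linear layer vs. a morphological (max-plus) layer, for comparison.
    import numpy as np

    def linear_layer(x, W, b):
        # classical neuron: weighted sum of inputs (a linear operation before thresholding)
        return x @ W + b

    def morphological_layer(x, W, b):
        # morphological neuron: addition replaces multiplication, maximum replaces summation,
        # so the computation is nonlinear even before any thresholding
        # output_j = max_i (x_i + W[i, j]) + b[j]
        return np.max(x[:, :, None] + W[None, :, :], axis=1) + b

    x = np.array([[0.5, -1.0, 2.0]])          # one input pattern with 3 features
    W = np.array([[0.1, -0.3],
                  [0.4,  0.2],
                  [-0.5, 0.6]])               # 3 inputs -> 2 outputs
    b = np.zeros(2)

    print("linear:       ", linear_layer(x, W, b))
    print("morphological:", morphological_layer(x, W, b))
    ```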

  4. Model Integrating Fuzzy Argument with Neural Network Enhancing the Performance of Active Queue Management

    Directory of Open Access Journals (Sweden)

    Nguyen Kim Quoc

    2015-08-01

    Full Text Available The bottleneck control by active queue management mechanisms at network nodes is essential. In recent years, some researchers have used fuzzy argument to improve active queue management mechanisms and enhance network performance. However, approaches using a fuzzy controller depend heavily on expert knowledge, and their parameters cannot be updated according to changes in the network, so the effectiveness of this mechanism is not high. Therefore, we propose a model combining the fuzzy controller with a neural network (FNN) to overcome the limitations above. The training of the neural network finds the optimal parameters that allow the adaptive fuzzy controller to respond well to changes in the network. This improves the operational efficiency of the active queue management mechanisms at network nodes.

  5. Adaptive RBF Neural Network Control for Three-Phase Active Power Filter

    Directory of Open Access Journals (Sweden)

    Juntao Fei

    2013-05-01

    Full Text Available An adaptive radial basis function (RBF) neural network control system for a three-phase active power filter (APF) is proposed to eliminate harmonics. A compensation current is generated to track the command current so as to eliminate the harmonic current of the non-linear load and improve the quality of the power system. The asymptotical stability of the APF system can be guaranteed with the proposed adaptive neural network strategy. The parameters of the neural network can be adaptively updated to achieve the desired tracking task. The simulation results demonstrate good performance, for example showing small current tracking error, reduced total harmonic distortion (THD), improved accuracy and strong robustness in the presence of parameter variations and nonlinear load. It is shown that the adaptive RBF neural network control system for the three-phase APF gives better control than hysteresis control.

  6. Neural oscillations: beta band activity across motor networks.

    Science.gov (United States)

    Khanna, Preeya; Carmena, Jose M

    2015-06-01

    Local field potential (LFP) activity in motor cortical and basal ganglia regions exhibits prominent beta (15-40Hz) oscillations during reaching and grasping, muscular contraction, and attention tasks. While in vitro and computational work has revealed specific mechanisms that may give rise to the frequency and duration of this oscillation, there is still controversy about what behavioral processes ultimately drive it. Here, simultaneous behavioral and large-scale neural recording experiments from non-human primate and human subjects are reviewed in the context of specific hypotheses about how beta band activity is generated. Finally, a new experimental paradigm utilizing operant conditioning combined with motor tasks is proposed as a way to further investigate this oscillation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. [Artificial neural networks in Neurosciences].

    Science.gov (United States)

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network and its operation inspired by the nervous system and the way the inputs are coded and the process of orthogonalization of patterns.

  8. Active random noise control using adaptive learning rate neural networks with an immune feedback law

    Science.gov (United States)

    Sasaki, Minoru; Kuribayashi, Takumi; Ito, Satoshi

    2005-12-01

    In this paper an active random noise control using adaptive learning rate neural networks with an immune feedback law is presented. The adaptive learning rate strategy increases the learning rate by a small constant if the current partial derivative of the objective function with respect to the weight and the exponential average of the previous derivatives have the same sign; otherwise the learning rate is decreased by a proportion of its value. The use of an adaptive learning rate attempts to keep the learning step size as large as possible without leading to oscillation. In the proposed method, because the immune feedback law changes the learning rate of the neural networks individually and adaptively, it is expected that the cost function is minimized rapidly and the training time is decreased. Numerical simulations and experiments of active random noise control with the transfer function of the error path are performed to validate the convergence properties of the adaptive learning rate neural networks with the immune feedback law. Control results show that the adaptive learning rate neural network control structure can outperform linear controllers and a conventional neural network controller for active random noise control.
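
    The learning-rate rule described above matches the classic delta-bar-delta idea: per-weight rates grow additively when the current gradient agrees in sign with an exponential average of past gradients, and shrink multiplicatively otherwise. The sketch below implements that generic rule; the constants kappa, phi, and theta, and the toy objective, are assumptions, and the immune feedback law itself is not modeled here.

    ```python
    # Delta-bar-delta style adaptive learning rates (generic rule, illustrative constants).
    import numpy as np

    def delta_bar_delta_step(w, grad, lr, grad_avg,
                             kappa=1e-3, phi=0.5, theta=0.7):
        agree = np.sign(grad) * np.sign(grad_avg) > 0
        lr = np.where(agree, lr + kappa, lr * (1.0 - phi))   # additive increase / proportional decrease
        grad_avg = theta * grad_avg + (1.0 - theta) * grad   # exponential average of past derivatives
        w = w - lr * grad                                    # gradient step with per-weight rates
        return w, lr, grad_avg

    # toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w
    w = np.array([2.0, -3.0])
    lr = np.full_like(w, 0.01)
    grad_avg = np.zeros_like(w)
    for _ in range(100):
        w, lr, grad_avg = delta_bar_delta_step(w, w, lr, grad_avg)
    print("w after 100 steps:", w)
    ```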

  9. Optogenetics in Silicon: A Neural Processor for Predicting Optically Active Neural Networks.

    Science.gov (United States)

    Junwen Luo; Nikolic, Konstantin; Evans, Benjamin D; Na Dong; Xiaohan Sun; Andras, Peter; Yakovlev, Alex; Degenaar, Patrick

    2017-02-01

    We present a reconfigurable neural processor for real-time simulation and prediction of opto-neural behaviour. We combined a detailed Hodgkin-Huxley CA3 neuron integrated with a four-state Channelrhodopsin-2 (ChR2) model into reconfigurable silicon hardware. Our architecture consists of a Field Programmable Gate Array (FPGA) with a custom-built computing data-path, a separate data management system and a router based on a memory approach. Advancements over previous work include the incorporation of short- and long-term calcium and light-dependent ion channels in reconfigurable hardware. Also, the developed processor is computationally efficient, requiring only 0.03 ms processing time per sub-frame for a single neuron and 9.7 ms for a fully connected network of 500 neurons with a given FPGA frequency of 56.7 MHz. It can therefore be utilized for exploration of closed-loop processing and tuning of biologically realistic optogenetic circuitry.

  10. Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR.

    Science.gov (United States)

    Winkler, David A; Le, Tu C

    2017-01-01

    Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the Kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNNs may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
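
    To make the structural difference concrete, the following is a hedged comparison of a one-hidden-layer ("shallow") and a multi-hidden-layer ("deep") network on a synthetic regression task. It uses scikit-learn and synthetic descriptors; it is not the paper's setup or the Kaggle drug data sets.

    ```python
    # Shallow vs. deep multilayer perceptron on synthetic regression data (illustrative).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 20))                       # synthetic "descriptors"
    y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

    shallow = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=1)
    deep = MLPRegressor(hidden_layer_sizes=(64, 64, 64), max_iter=1000, random_state=1)

    for name, model in [("shallow", shallow), ("deep", deep)]:
        model.fit(X_tr, y_tr)
        print(name, "R^2 on test set:", round(model.score(X_te, y_te), 3))
    ```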

  11. Dissipativity and Synchronization of Generalized BAM Neural Networks With Multivariate Discontinuous Activations.

    Science.gov (United States)

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2017-09-14

    This paper is concerned with the dissipativity and synchronization problems of a class of delayed bidirectional associative memory (BAM) neural networks in which neuron activations are modeled by discontinuous bivariate functions. First, the concept of the Filippov solution is extended to functional differential equations with discontinuous right-hand sides and mixed delays via functional differential inclusions. The global dissipativity of the Filippov solution to the considered BAM neural networks is proven using generalized Halanay inequalities and matrix measure approaches. Second, to realize global exponential complete synchronization of BAM neural networks with multivariate discontinuous activations, discontinuous state feedback controllers are designed using functional differential inclusions theory and nonsmooth analysis theory with generalized Lyapunov functional method. Finally, several numerical examples are provided to demonstrate the applicability and effectiveness of our proposed results.

  12. Complete stability of delayed recurrent neural networks with Gaussian activation functions.

    Science.gov (United States)

    Liu, Peng; Zeng, Zhigang; Wang, Jun

    2017-01-01

    This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of the nonsingular M-matrix, some sufficient conditions are obtained to ensure that for an n-neuron neural network, there are exactly 3^k equilibrium points with 0 ≤ k ≤ n, among which 2^k and 3^k − 2^k equilibrium points are locally exponentially stable and unstable, respectively. Moreover, it concludes that all the states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions herein can be easily tested. Finally, a numerical example is given to illustrate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
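
    A quick numerical way to see where the factor of three per neuron comes from: a single neuron with a Gaussian activation, x' = -x + w*g(x) + u, can have up to three equilibria, so k such neurons contribute up to 3^k. The scalar form and all parameter values below are our own illustrative choices, not the paper's example.

    ```python
    # Count equilibria of a one-neuron model with a Gaussian activation (illustrative parameters).
    import numpy as np

    def g(x, mu=2.0, sigma=0.5):
        return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))   # Gaussian activation

    def rhs(x, w=3.0, u=0.1):
        return -x + w * g(x) + u                              # right-hand side of x' = -x + w*g(x) + u

    # scan for sign changes of the right-hand side: each sign change brackets one equilibrium
    xs = np.linspace(-2.0, 5.0, 70001)
    vals = rhs(xs)
    crossings = xs[np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]]
    print("approximate equilibria:", np.round(crossings, 3))   # three roots for these parameters
    ```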

  13. Navigation of autonomous mobile robot using different activation functions of wavelet neural network

    Directory of Open Access Journals (Sweden)

    Panigrahi Pratap Kumar

    2015-03-01

    Full Text Available An autonomous mobile robot is a robot which can move and act autonomously without human assistance. The navigation problem of a mobile robot in an unknown environment is an interesting research area. This is the problem of deducing a path for the robot from its initial position to a given goal position without collision with the obstacles. Different methods such as fuzzy logic, neural networks etc. are used to find a collision-free path for a mobile robot. This paper examines the path-planning behavior of a mobile robot using three activation functions of a wavelet neural network, i.e. the Mexican Hat, Gaussian and Morlet wavelet functions, in MATLAB. The simulation results show that the WNN has a faster learning speed than a traditional artificial neural network.
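
    For reference, the three wavelet activation functions named above are written out here in their common real-valued forms (normalization constants dropped). Whether the paper uses exactly these expressions, e.g. the plain Gaussian bell rather than a Gaussian-derivative wavelet, is an assumption, and the hidden-unit parameterization is a generic one.

    ```python
    # Common real-valued forms of the wavelet activation functions used in WNNs (illustrative).
    import numpy as np

    def mexican_hat(t):
        return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

    def gaussian(t):
        return np.exp(-t ** 2 / 2.0)

    def morlet(t, w0=5.0):
        return np.cos(w0 * t) * np.exp(-t ** 2 / 2.0)

    # In a wavelet neural network a hidden unit applies one of these to a scaled and shifted input:
    def wnn_hidden_unit(x, weight, translation, dilation, psi=mexican_hat):
        return psi((weight * x - translation) / dilation)

    print(wnn_hidden_unit(np.array([0.2, 0.5, 1.0]), weight=1.5, translation=0.3, dilation=0.8))
    ```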

  14. Identification of children's activity type with accelerometer-based neural networks

    NARCIS (Netherlands)

    Vries, S.I. de; Engels, M.; Garre, F.G.

    2011-01-01

    Purpose: The study's purpose was to identify children's physical activity type using artificial neural network (ANN) models based on uniaxial or triaxial accelerometer data from the hip or the ankle. Methods: Fifty-eight children (31 boys and 27 girls, age range = 9-12 yr) performed the following

  15. Evaluation of neural networks to identify types of activity using accelerometers

    NARCIS (Netherlands)

    Vries, S.I. de; Garre, F.G.; Engbers, L.H.; Hildebrandt, V.H.; Buuren, S. van

    2011-01-01

    Purpose: To develop and evaluate two artificial neural network (ANN) models based on single-sensor accelerometer data and an ANN model based on the data of two accelerometers for the identification of types of physical activity in adults. Methods: Forty-nine subjects (21 men and 28 women; age range

  16. An investigation of the relationship between activation of a social cognitive neural network and social functioning.

    Science.gov (United States)

    Pinkham, Amy E; Hopfinger, Joseph B; Ruparel, Kosha; Penn, David L

    2008-07-01

    Previous work examining the neurobiological substrates of social cognition in healthy individuals has reported modulation of a social cognitive network such that increased activation of the amygdala, fusiform gyrus, and superior temporal sulcus are evident when individuals judge a face to be untrustworthy as compared with trustworthy. We examined whether this pattern would be present in individuals with schizophrenia who are known to show reduced activation within these same neural regions when processing faces. Additionally, we sought to determine how modulation of this social cognitive network may relate to social functioning. Neural activation was measured using functional magnetic resonance imaging with blood oxygenation level dependent contrast in 3 groups of individuals--nonparanoid individuals with schizophrenia, paranoid individuals with schizophrenia, and healthy controls--while they rated faces as either trustworthy or untrustworthy. Analyses of mean percent signal change extracted from a priori regions of interest demonstrated that both controls and nonparanoid individuals with schizophrenia showed greater activation of this social cognitive network when they rated a face as untrustworthy relative to trustworthy. In contrast, paranoid individuals did not show a significant difference in levels of activation based on how they rated faces. Further, greater activation of this social cognitive network to untrustworthy faces was significantly and positively correlated with social functioning. These findings indicate that impaired modulation of neural activity while processing social stimuli may underlie deficits in social cognition and social dysfunction in schizophrenia.

  17. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    Science.gov (United States)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied in order to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better for image retrieval than direct extraction of image visual features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
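
    A hedged PyTorch sketch of the combination described above: a small convolutional network with PReLU activations whose training loss carries an L1 penalty on the weights. The architecture, image size (3x32x32), and the penalty strength are our assumptions, not the network used in the paper.

    ```python
    # Small CNN with PReLU activations and an L1 weight penalty added to the loss (illustrative).
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # the flattened features can also serve as a retrieval descriptor

        def forward(self, x):
            h = self.features(x)
            return self.classifier(h.flatten(1))

    model = SmallCNN()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    l1_lambda = 1e-5                                   # strength of the L1 regularization (assumed)

    images = torch.randn(8, 3, 32, 32)                 # a dummy mini-batch
    labels = torch.randint(0, 10, (8,))

    optimizer.zero_grad()
    logits = model(images)
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    loss = criterion(logits, labels) + l1_lambda * l1_penalty   # L1 term discourages over-fitting
    loss.backward()
    optimizer.step()
    ```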

  18. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations

    Directory of Open Access Journals (Sweden)

    Sheng-Jun Wang

    2011-06-01

    Full Text Available Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. It was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of the critical state and the predictability and timing of oscillations for efficient

  19. Neural Networks: Implementations and Applications

    NARCIS (Netherlands)

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  20. Deep Recurrent Neural Network for Mobile Human Activity Recognition with High Throughput

    OpenAIRE

    Inoue, Masaya; Inoue, Sozo; Nishida, Takeshi

    2016-01-01

    In this paper, we propose a method of human activity recognition with high throughput from raw accelerometer data applying a deep recurrent neural network (DRNN), and investigate various architectures and their combinations to find the best parameter values. "High throughput" here refers to a short recognition time. We investigated various parameters and architectures of the DRNN by using the training dataset of 432 trials with 6 activity classes from 7 people. The maximum recognition ...

  1. Emergence of gamma motor activity in an artificial neural network model of the corticospinal system.

    Science.gov (United States)

    Grandjean, Bernard; Maier, Marc A

    2017-02-01

    Muscle spindle discharge during active movement is a function of mechanical and neural parameters. Muscle length changes (and their derivatives) represent its primary mechanical, fusimotor drive its neural component. However, neither the action nor the function of fusimotor and in particular of γ-drive, have been clearly established, since γ-motor activity during voluntary, non-locomotor movements remains largely unknown. Here, using a computational approach, we explored whether γ-drive emerges in an artificial neural network model of the corticospinal system linked to a biomechanical antagonist wrist simulator. The wrist simulator included length-sensitive and γ-drive-dependent type Ia and type II muscle spindle activity. Network activity and connectivity were derived by a gradient descent algorithm to generate reciprocal, known target α-motor unit activity during wrist flexion-extension (F/E) movements. Two tasks were simulated: an alternating F/E task and a slow F/E tracking task. Emergence of γ-motor activity in the alternating F/E network was a function of α-motor unit drive: if muscle afferent (together with supraspinal) input was required for driving α-motor units, then γ-drive emerged in the form of α-γ coactivation, as predicted by empirical studies. In the slow F/E tracking network, γ-drive emerged in the form of α-γ dissociation and provided critical, bidirectional muscle afferent activity to the cortical network, containing known bidirectional target units. The model thus demonstrates the complementary aspects of spindle output and hence γ-drive: i) muscle spindle activity as a driving force of α-motor unit activity, and ii) afferent activity providing continuous sensory information, both of which crucially depend on γ-drive.

  2. Video-based convolutional neural networks for activity recognition from robot-centric videos

    Science.gov (United States)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks to learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
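
    As a minimal illustration of the first family of architectures mentioned above, the PyTorch sketch below applies a single 3-D (XYT) convolution to a short video clip. The clip size, channel counts, and single-layer design are illustrative assumptions, not the models evaluated in the paper.

    ```python
    # One spatio-temporal (XYT) convolution over a video clip, producing a per-clip descriptor.
    import torch
    import torch.nn as nn

    # input video: batch x channels x time x height x width
    clip = torch.randn(2, 3, 16, 112, 112)   # two 16-frame RGB clips

    conv3d = nn.Conv3d(in_channels=3, out_channels=64,
                       kernel_size=(3, 7, 7),      # 3 frames x 7 x 7 pixels: a spatio-temporal filter
                       stride=(1, 2, 2), padding=(1, 3, 3))
    pool = nn.AdaptiveAvgPool3d(1)                 # collapse the remaining XYT dimensions

    features = pool(torch.relu(conv3d(clip))).flatten(1)   # one 64-D descriptor per clip
    print(features.shape)                                   # torch.Size([2, 64])
    ```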

  3. Neural network based semi-active control strategy for structural vibration mitigation with magnetorheological damper

    DEFF Research Database (Denmark)

    Bhowmik, Subrata

    2011-01-01

    This paper presents a neural network based semi-active control method for a rotary type magnetorheological (MR) damper. The characteristics of the MR damper are described by the classic Bouc-Wen model, and the performance of the proposed control method is evaluated in terms of a base-excited shear...... frame structure. As demonstrated in the literature effective damping of flexible structures is obtained by a suitable combination of pure friction and negative damper stiffness. This damper model is rate-independent and fully described by the desired shape of the hysteresis loops or force...... mode of the structure. The neural network control is then developed to reproduce the desired force based on damper displacement and velocity as network input, and it is therefore referred to as an amplitude dependent model reference control method. An inverse model of the MR damper is needed...
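
    For orientation, the classic Bouc-Wen hysteresis model referenced above can be sketched as follows. This is the standard textbook form with made-up parameter values; the rotary damper in the paper has its own identified parameters, so treat everything numeric here as an assumption.

    ```python
    # Classic Bouc-Wen hysteresis model driven by a sinusoidal displacement (illustrative parameters).
    import numpy as np

    def bouc_wen_force(x, xdot, z, k0=500.0, c0=50.0, alpha=800.0,
                       A=100.0, beta=300.0, gamma=300.0, n=2):
        """Damper force and the time derivative of the evolutionary variable z."""
        zdot = A * xdot - beta * abs(xdot) * (abs(z) ** (n - 1)) * z - gamma * xdot * (abs(z) ** n)
        force = c0 * xdot + k0 * x + alpha * z
        return force, zdot

    # drive the model with a sinusoidal displacement to trace a hysteresis loop
    dt = 1e-3
    t = np.arange(0.0, 2.0, dt)
    x = 0.01 * np.sin(2 * np.pi * 2 * t)           # 2 Hz, 10 mm amplitude
    xdot = np.gradient(x, dt)
    z, forces = 0.0, []
    for xi, vi in zip(x, xdot):
        f, zdot = bouc_wen_force(xi, vi, z)
        forces.append(f)
        z += zdot * dt                              # explicit Euler integration of z
    print("peak damper force:", max(forces))
    ```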

  4. Forecast and restoration of geomagnetic activity indices by using the software-computational neural network complex

    Science.gov (United States)

    Barkhatov, Nikolay; Revunov, Sergey

    2010-05-01

    It is known that the currently used indices of geomagnetic activity to some extent reflect the physical processes occurring in the interaction of the perturbed solar wind with Earth's magnetosphere. Therefore, they are connected to each other and to the parameters of near-Earth space. Establishing such nonlinear connections is of interest. For such purposes, when the physical problem is complex or has many parameters, the technology of artificial neural networks is applied. This approach is used to develop an automated method for forecasting and restoring geomagnetic activity indices, implemented as a software-computational neural network complex. Each neural network experiment carried out with this complex aims to search for a specific nonlinear relation between the analyzed indices and parameters. At the core of the program is a scheme combining artificial neural networks (ANN) of different types: a back-propagation Elman network, a feed-forward network, a fuzzy logic network and a Kohonen classification layer. The settings in the main window of the application allow the user to change the number of hidden layers, the number of neurons per layer, the input and target data, and the number of training cycles. The training process and its quality are monitored through a dynamic plot of the training error, and the result of training is a plot comparing the network response with the test sequence. The last-trained neural network, with its established nonlinear connection, can be run again for repeated numerical experiments; in that case no additional training is performed, and the previously trained network acts as a filter through which the input parameters are passed and the outputs are compared with the test event. For carrying out a large number of different experiments, the program can also be run in a "batch" mode. For this purpose the user

  5. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  6. Fundamental Active Current Adaptive Linear Neural Networks for Photovoltaic Shunt Active Power Filters

    Directory of Open Access Journals (Sweden)

    Muhammad Ammirrul Atiqi Mohd Zainuri

    2016-05-01

    Full Text Available This paper presents an improvement of a harmonics extraction algorithm, known as the fundamental active current (FAC) adaptive linear element (ADALINE) neural network, with the integration of photovoltaics (PV) into shunt active power filters (SAPFs) as an active current source. Active PV injection in SAPFs should reduce dependency on grid supply current to supply the system. In addition, with a better and faster harmonics extraction algorithm, the SAPF should perform well, especially under dynamic PV and load conditions. The role of the actual injection current from the SAPF after connecting PVs is evaluated, and the better effect of using FAC ADALINE is confirmed. The proposed SAPF was first simulated and evaluated in MATLAB/Simulink. Then, an experimental laboratory prototype was also developed and tested with a PV simulator (CHROMA 62100H-600S), and the algorithm was implemented using a TMS320F28335 Digital Signal Processor (DSP). From simulation and experimental results, significant improvements in terms of total harmonic distortion (THD), time response and reduction of source power from the grid have successfully been verified and achieved.
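
    The basic ADALINE idea behind such fundamental-current extraction is sketched below: the load current is modelled as a weighted sin/cos pair at the fundamental frequency, and the weights are adapted online with the Widrow-Hoff (LMS) rule. The sampling rate, learning rate, and synthetic load current are assumptions, and for brevity the sketch extracts the full fundamental rather than only its in-phase (active) part relative to the source voltage.

    ```python
    # ADALINE with the Widrow-Hoff (LMS) rule extracting the fundamental from a distorted current.
    import numpy as np

    f, fs = 50.0, 10_000.0                       # fundamental frequency and sampling rate (Hz)
    t = np.arange(0, 0.2, 1.0 / fs)
    # synthetic distorted load current: fundamental plus 5th and 7th harmonics
    i_load = (10 * np.sin(2 * np.pi * f * t + 0.3)
              + 2 * np.sin(2 * np.pi * 5 * f * t)
              + 1 * np.sin(2 * np.pi * 7 * f * t))

    w = np.zeros(2)                              # weights of the sin/cos pair (fundamental only)
    eta = 0.01                                   # LMS learning rate (assumed)
    i_fund = np.zeros_like(i_load)
    for k in range(len(t)):
        x = np.array([np.sin(2 * np.pi * f * t[k]), np.cos(2 * np.pi * f * t[k])])
        y = w @ x                                # ADALINE estimate of the fundamental component
        e = i_load[k] - y                        # residual: treated as harmonic content
        w += eta * e * x                         # Widrow-Hoff weight update
        i_fund[k] = y

    i_harmonics = i_load - i_fund                # reference for the SAPF compensation current
    print("estimated fundamental amplitude:", np.hypot(*w))
    ```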

  7. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  8. Alternative Sensor System and MLP Neural Network for Vehicle Pedal Activity Estimation

    Directory of Open Access Journals (Sweden)

    Ahmed M. Wefky

    2010-04-01

    Full Text Available It is accepted that the activity of the vehicle pedals (i.e., throttle, brake, clutch) reflects the driver’s behavior, which is at least partially related to the fuel consumption and vehicle pollutant emissions. This paper presents a solution to estimate the driver activity regardless of the type, model, and year of fabrication of the vehicle. The solution is based on an alternative sensor system (engine regime, vehicle speed, frontal inclination and linear acceleration) that reflects the activity of the pedals in an indirect way, in order to estimate that activity by means of a multilayer perceptron neural network with a single hidden layer.

  9. Optimal Recognition Method of Human Activities Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Oniga Stefan

    2015-12-01

    Full Text Available The aim of this research is an exhaustive analysis of the various factors that may influence the recognition rate of human activity using wearable sensor data. We made a total of 1674 simulations on a publicly released human activity database created by a group of researchers from the University of California at Berkeley. In a previous research, we analyzed the influence of the number of sensors and their placement. In the present research we have examined the influence of the number of sensor nodes, the type of sensor node, preprocessing algorithms, the type of classifier and its parameters. The final purpose is to find the optimal setup for the best recognition rates with the lowest hardware and software costs.

  10. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is written...

  11. The effects of dynamical synapses on firing rate activity: a spiking neural network model.

    Science.gov (United States)

    Khalil, Radwa; Moftah, Marie Z; Moustafa, Ahmed A

    2017-11-01

    Accumulating evidence relates the fine-tuning of synaptic maturation and regulation of neural network activity to several key factors, including GABA-A signaling and a lateral spread length between neighboring neurons (i.e., local connectivity). Furthermore, a number of studies consider short-term synaptic plasticity (STP) as an essential element in the instant modification of synaptic efficacy in the neuronal network and in modulating responses to sustained ranges of external Poisson input frequency (IF). Nevertheless, evaluating the firing activity in response to the dynamical interaction between STP (triggered by ranges of IF) and these key parameters in vitro remains elusive. Therefore, we designed a spiking neural network (SNN) model in which we incorporated the following parameters: local density of arbor essences and a lateral spread length between neighboring neurons. We also created several network scenarios based on these key parameters. Then, we implemented two classes of STP: (1) short-term synaptic depression (STD) and (2) short-term synaptic facilitation (STF). Each class has two differential forms based on the parametric value of its synaptic time constant (either for depressing or facilitating synapses). Lastly, we compared the neural firing responses before and after the treatment with STP. We found that dynamical synapses (STP) have a critical differential role in evaluating and modulating the firing rate activity in each network scenario. Moreover, we investigated the impact of changing the balance between excitation (E) and inhibition (I) on stabilizing this firing activity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
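
    For readers unfamiliar with STD/STF, the sketch below implements a standard event-driven Tsodyks-Markram formulation of short-term plasticity; the parameter values are illustrative and are not taken from the cited network scenarios.

```python
import numpy as np

# Event-driven Tsodyks-Markram short-term plasticity. Depressing (STD) and
# facilitating (STF) synapses differ mainly in their time constants and in
# the baseline release probability U.
def stp_efficacies(spike_times, U=0.2, tau_d=0.2, tau_f=1.5):
    """Return the relative synaptic efficacy u*x at each presynaptic spike."""
    u, x = U, 1.0
    last_t = None
    eff = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_f)       # facilitation decays back to U
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)   # resources recover towards 1
        u = u + U * (1.0 - u)        # a spike increases the release probability
        eff.append(u * x)            # efficacy of this spike
        x = x * (1.0 - u)            # the spike depletes the available resources
        last_t = t
    return np.array(eff)

spikes = np.arange(0.0, 1.0, 0.05)                              # 20 Hz presynaptic train
print(stp_efficacies(spikes, U=0.5, tau_d=0.8, tau_f=0.05))     # depression-dominated
print(stp_efficacies(spikes, U=0.1, tau_d=0.05, tau_f=1.0))     # facilitation-dominated
```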

  12. Neural network applications

    Science.gov (United States)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  13. Adaptive neural networks control for camera stabilization with active suspension system

    Directory of Open Access Journals (Sweden)

    Feng Zhao

    2015-08-01

    Full Text Available The camera on a moving vehicle always suffers from image instability due to unintentional vibrations caused by road roughness. This article presents an adaptive neural network approach combined with linear quadratic regulator (LQR) control for a quarter-car active suspension system to stabilize the image capture area of the camera. An active suspension system provides extra force through the actuator, which allows it to suppress vertical vibration of the sprung mass. First, to deal with the road disturbance and the system uncertainties, a radial basis function neural network is proposed to construct the map between the state error and the compensation component, which corrects the optimal state-feedback control law. The weight matrix of the radial basis function neural network is adaptively tuned online. Then, closed-loop stability and asymptotic convergence performance are guaranteed by Lyapunov analysis. Finally, the simulation results demonstrate that the proposed controller effectively suppresses the vibration of the camera and enhances the stabilization of the entire camera, where different excitations are considered to validate the system performance.
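
    A minimal sketch of the control structure described above is given below: an LQR state-feedback term plus an RBF-network compensation term whose weights are tuned online. The Gaussian centres, widths, gains and the simple gradient-type adaptation rule are assumptions for illustration; the paper derives its own Lyapunov-based update law.

```python
import numpy as np

# LQR state feedback plus adaptive RBF compensation (illustrative sketch).
rng = np.random.default_rng(0)
n_state, n_rbf = 4, 9
centers = rng.uniform(-1.0, 1.0, size=(n_rbf, n_state))  # RBF centres (assumed)
width = 0.5                                               # RBF width (assumed)
W = np.zeros(n_rbf)                                       # adaptive output weights
gamma = 5.0                                               # adaptation gain (assumed)

def rbf_features(e):
    d2 = np.sum((centers - e) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def control(x, x_ref, K):
    e = x_ref - x                          # state tracking error
    phi = rbf_features(e)
    u = -K @ x + W @ phi                   # LQR term + neural compensation
    return u, phi, e

def adapt(phi, e, dt=0.001):
    # Simple gradient-type online update driven by the error norm, standing in
    # for the Lyapunov-derived adaptation law of the paper.
    global W
    W += gamma * np.linalg.norm(e) * phi * dt

K = np.ones(n_state)                       # placeholder gain from an LQR design
u, phi, e = control(np.zeros(n_state), 0.1 * np.ones(n_state), K)
adapt(phi, e)
```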

  14. Wrestling model of the repertoire of activity propagation modes in quadruple neural networks.

    Science.gov (United States)

    Shteingart, Hanan; Raichman, Nadav; Baruchi, Itay; Ben-Jacob, Eshel

    2010-01-01

    The spontaneous activity of engineered quadruple cultured neural networks (of four coupled sub-networks) exhibits a repertoire of different types of mutual synchronization events. Each event corresponds to a specific activity propagation mode (APM) defined by the order of activity propagation between the sub-networks. We statistically characterized the frequency of spontaneous appearance of the different types of APMs. The relative frequencies of the APMs were then examined for their power-law properties. We found that the frequencies of appearance of the leading (most frequent) APMs have a close to constant algebraic ratio, reminiscent of Zipf's scaling of words. We show that the observations are consistent with a simplified "wrestling" model. This model represents an extension of the "boxing arena" model which was previously proposed to describe the ratio between the two activity modes in two coupled sub-networks. The additional new element in the "wrestling" model presented here is that the firing within each network is modeled by a time-interval generator with a similar intra-network Lévy distribution. We modeled the interaction between the different burst-initiation zones by competition between the stochastic generators, with Gaussian inter-network variability. Estimation of the model parameters revealed similarity across different cultures, while the inter-burst interval of the cultures was similar across different APMs, as numerical simulation of the model predicts.

  15. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation. PMID:26797612
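
    A compact PyTorch sketch of such a convolutional-plus-LSTM architecture is shown below; the number of channels, filters and layers are illustrative and do not reproduce the exact DeepConvLSTM configuration.

```python
import torch
import torch.nn as nn

# Minimal convolutional + LSTM network for multimodal wearable HAR,
# in the spirit of the framework described above (layer sizes are illustrative).
class ConvLSTM_HAR(nn.Module):
    def __init__(self, n_channels=9, n_classes=6, n_filters=64, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(                  # temporal convolutions per window
            nn.Conv1d(n_channels, n_filters, kernel_size=5), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=5), nn.ReLU(),
        )
        self.lstm = nn.LSTM(n_filters, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                           # x: (batch, channels, time)
        f = self.conv(x)                            # (batch, filters, time')
        f = f.transpose(1, 2)                       # (batch, time', filters)
        out, _ = self.lstm(f)
        return self.fc(out[:, -1, :])               # classify from the last time step

model = ConvLSTM_HAR()
window = torch.randn(8, 9, 128)        # 8 windows, 9 sensor channels, 128 samples
logits = model(window)                 # (8, n_classes)
```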

  16. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    Science.gov (United States)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. An emphasis is placed on activities involving potential security threats, such as holding a gun. An automotive 24 GHz radar-on-chip was used to collect the data, and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65% on classifying running vs. walking, 17.3% on armed walking vs. unarmed walking, and 22% on classifying six different actions.
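
    A minimal PyTorch sketch of a CNN operating directly on spectrogram "images" is shown below; the layer sizes and input resolution are assumptions rather than the network used in the paper.

```python
import torch
import torch.nn as nn

# Small 2-D CNN operating on micro-Doppler spectrograms treated as
# single-channel images (illustrative architecture only).
class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                       # x: (batch, 1, freq_bins, time_bins)
        f = self.features(x).flatten(1)
        return self.classifier(f)

spectrograms = torch.randn(4, 1, 128, 128)      # e.g. 128 Doppler bins x 128 time frames
logits = SpectrogramCNN(n_classes=2)(spectrograms)   # e.g. running vs. walking
```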

  17. Prediction of enzyme activity with neural network models based on electronic and geometrical features of substrates.

    Science.gov (United States)

    Szaleniec, Maciej

    2012-01-01

    Artificial Neural Networks (ANNs) are introduced as robust and versatile tools in quantitative structure-activity relationship (QSAR) modeling. Their application to the modeling of enzyme reactivity is discussed, along with methodological issues. Methods of input variable selection, optimization of network internal structure, data set division and model validation are discussed. The application of ANNs in the modeling of enzyme activity over the last 20 years is briefly recounted. The discussed methodology is exemplified by the case of ethylbenzene dehydrogenase (EBDH). Intelligent Problem Solver and genetic algorithms are applied for input vector selection, whereas k-means clustering is used to partition the data into training and test cases. The obtained models exhibit high correlation between the predicted and experimental values (R^2 > 0.9). Sensitivity analyses and study of the response curves are used as tools for the physicochemical interpretation of the models in terms of the EBDH reaction mechanism. Neural networks are shown to be a versatile tool for the construction of robust QSAR models that can be applied to a range of aspects important in drug design and the prediction of biological activity.
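
    The sketch below illustrates the data-handling idea of the described workflow, using k-means clusters to pick a structurally diverse test set before fitting an MLP; the synthetic descriptors and activity values stand in for the EBDH substrate data and are not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Illustrative QSAR-style workflow: k-means clustering in descriptor space is
# used to split compounds into training and test sets so that both sets cover
# the same regions of descriptor space, then an MLP models the activity.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))                    # electronic/geometric descriptors (synthetic)
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=120)   # synthetic "activity"

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
test_idx = np.concatenate([np.where(labels == c)[0][:2] for c in range(10)])
train_idx = np.setdiff1d(np.arange(len(X)), test_idx)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[train_idx], y[train_idx])
print("R^2 on the k-means-selected test set:",
      round(r2_score(y[test_idx], model.predict(X[test_idx])), 3))
```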

  18. Implications of the Dependence of Neuronal Activity on Neural Network States for the Design of Brain-Machine Interfaces.

    Science.gov (United States)

    Panzeri, Stefano; Safaai, Houman; De Feo, Vito; Vato, Alessandro

    2016-01-01

    Brain-machine interfaces (BMIs) can improve the quality of life of patients with sensory and motor disabilities by both decoding motor intentions expressed by neural activity, and by encoding artificially sensed information into patterns of neural activity elicited by causal interventions on the neural tissue. Yet, current BMIs can exchange relatively small amounts of information with the brain. This problem has proved difficult to overcome by simply increasing the number of recording or stimulating electrodes, because trial-to-trial variability of neural activity partly arises from intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation, and so is shared among neurons. Here we review recent progress in characterizing the state dependence of neural responses, and in particular of how neural responses depend on endogenous slow fluctuations of network excitability. We then elaborate on how this knowledge may be used to increase the amount of information that BMIs exchange with brain. Knowledge of network state can be used to fine-tune the stimulation pattern that should reliably elicit a target neural response used to encode information in the brain, and to discount part of the trial-by-trial variability of neural responses, so that they can be decoded more accurately.

  19. Implications of the dependence of neuronal activity on neural network states for the design of brain-machine interfaces

    Directory of Open Access Journals (Sweden)

    Stefano ePanzeri

    2016-04-01

    Full Text Available Brain-machine interfaces (BMIs) can improve the quality of life of patients with sensory and motor disabilities by both decoding motor intentions expressed by neural activity, and by encoding artificially sensed information into patterns of neural activity elicited by causal interventions on the neural tissue. Yet, current BMIs can exchange relatively small amounts of information with the brain. This problem has proved difficult to overcome by simply increasing the number of recording or stimulating electrodes, because trial-to-trial variability of neural activity partly arises from intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation, and so is shared among neurons. Here we review recent progress in characterizing the state dependence of neural responses, and in particular of how neural responses depend on endogenous slow fluctuations of network excitability. We then elaborate on how this knowledge may be used to increase the amount of information that BMIs exchange with the brain. Knowledge of network state can be used to fine-tune the stimulation pattern that should reliably elicit a target neural response used to encode information in the brain, and to discount part of the trial-by-trial variability of neural responses, so that they can be decoded more accurately.

  20. Hyperbolic Hopfield neural networks.

    Science.gov (United States)

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.
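
    As background for the algebra involved, the sketch below implements hyperbolic (split-complex) numbers, for which j^2 = +1, together with a weighted sum over them; it shows only the arithmetic, not the HHNN neuron update or energy function defined in the paper.

```python
from dataclasses import dataclass

# Hyperbolic (split-complex) numbers a + b*j with j*j = +1, the Clifford
# algebra on which hyperbolic Hopfield neurons are built.
@dataclass
class Hyperbolic:
    a: float   # real part
    b: float   # coefficient of the unipotent unit j

    def __add__(self, other):
        return Hyperbolic(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + bj)(c + dj) = (ac + bd) + (ad + bc)j, because j^2 = +1
        return Hyperbolic(self.a * other.a + self.b * other.b,
                          self.a * other.b + self.b * other.a)

def weighted_sum(weights, states):
    """Weighted input to a hyperbolic neuron from a list of hyperbolic states."""
    total = Hyperbolic(0.0, 0.0)
    for w, s in zip(weights, states):
        total = total + w * s
    return total

print(weighted_sum([Hyperbolic(1.0, 0.5)], [Hyperbolic(0.2, -1.0)]))
```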

  1. Application of an artificial neural network for evaluation of activity concentration exemption limits in NORM industry.

    Science.gov (United States)

    Wiedner, Hannah; Peyrés, Virginia; Crespo, Teresa; Mejuto, Marcos; García-Toraño, Eduardo; Maringer, Franz Josef

    2017-08-01

    NORM emits many different gamma energies that have to be analysed by an expert. Alternatively, artificial neural networks (ANNs) can be used. These mathematical software tools can generalize "knowledge" gained from training datasets, applying it to new problems. No expert knowledge of gamma-ray spectrometry is needed by the end-user. In this work an ANN was created that is able to decide from the raw gamma-ray spectrum if the activity concentrations in a sample are above or below the exemption limits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks.

  3. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with the LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
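
    The sketch below reproduces that framing in plain NumPy: the convolution is written as a matrix equation y = Hx, and a pseudo-inverse solution is compared with an iterative LMS-style gradient descent, which plays the role that the backpropagation network plays in the report. The kernel, sizes and step size are illustrative.

```python
import numpy as np

# One-dimensional deconvolution as matrix inversion: y = H x, where H is the
# (banded Toeplitz) convolution matrix of the kernel h.
rng = np.random.default_rng(0)
n = 64
x_true = rng.normal(size=n)
h = np.array([0.5, 1.0, 0.5])                       # blurring kernel

H = np.zeros((n, n))
for i in range(n):                                  # build the convolution matrix
    for k, hk in enumerate(h):
        j = i + k - 1
        if 0 <= j < n:
            H[i, j] = hk
y = H @ x_true

x_pinv = np.linalg.pinv(H) @ y                      # pseudo-inverse solution

x_lms = np.zeros(n)                                 # LMS-style gradient descent
mu = 0.1 / np.linalg.norm(H, 2) ** 2                # safe step size
for _ in range(2000):
    x_lms += mu * H.T @ (y - H @ x_lms)

print("pinv error:", np.linalg.norm(x_pinv - x_true))
print("LMS  error:", np.linalg.norm(x_lms - x_true))
```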

  4. Mdm2 mediates FMRP- and Gp1 mGluR-dependent protein translation and neural network activity.

    Science.gov (United States)

    Liu, Dai-Chi; Seimetz, Joseph; Lee, Kwan Young; Kalsotra, Auinash; Chung, Hee Jung; Lu, Hua; Tsai, Nien-Pei

    2017-10-15

    Activating Group 1 (Gp1) metabotropic glutamate receptors (mGluRs), including mGluR1 and mGluR5, elicits translation-dependent neural plasticity mechanisms that are crucial to animal behavior and circuit development. Dysregulated Gp1 mGluR signaling has been observed in numerous neurological and psychiatric disorders. However, the molecular pathways underlying Gp1 mGluR-dependent plasticity mechanisms are complex and have been elusive. In this study, we identified a novel mechanism through which Gp1 mGluR mediates protein translation and neural plasticity. Using a multi-electrode array (MEA) recording system, we showed that activating Gp1 mGluR elevates neural network activity, as demonstrated by increased spontaneous spike frequency and burst activity. Importantly, we validated that elevating neural network activity requires protein translation and is dependent on fragile X mental retardation protein (FMRP), the protein that is deficient in the most common inherited form of mental retardation and autism, fragile X syndrome (FXS). In an effort to determine the mechanism by which FMRP mediates protein translation and neural network activity, we demonstrated that a ubiquitin E3 ligase, murine double minute-2 (Mdm2), is required for Gp1 mGluR-induced translation and neural network activity. Our data showed that Mdm2 acts as a translation suppressor, and FMRP is required for its ubiquitination and down-regulation upon Gp1 mGluR activation. These data revealed a novel mechanism by which Gp1 mGluR and FMRP mediate protein translation and neural network activity, potentially through de-repressing Mdm2. Our results also introduce an alternative way for understanding altered protein translation and brain circuit excitability associated with Gp1 mGluR in neurological diseases such as FXS. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems, and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  6. Rules of engagement: factors that regulate activity-dependent synaptic plasticity during neural network development.

    Science.gov (United States)

    Stoneham, Emily T; Sanders, Erin M; Sanyal, Mohima; Dumas, Theodore C

    2010-10-01

    Overproduction and pruning during development is a phenomenon that can be observed in the number of organisms in a population, the number of cells in many tissue types, and even the number of synapses on individual neurons. The sculpting of synaptic connections in the brain of a developing organism is guided by its personal experience, which on a neural level translates to specific patterns of activity. Activity-dependent plasticity at glutamatergic synapses is an integral part of neuronal network formation and maturation in developing vertebrate and invertebrate brains. As development of the rodent forebrain transitions away from an over-proliferative state, synaptic plasticity undergoes modification. Late developmental changes in synaptic plasticity signal the establishment of a more stable network and relate to pronounced perceptual and cognitive abilities. In large part, activation of glutamate-sensitive N-methyl-d-aspartate (NMDA) receptors regulates synaptic stabilization during development and is a necessary step in memory formation processes that occur in the forebrain. A developmental change in the subunits that compose NMDA receptors coincides with developmental modifications in synaptic plasticity and cognition, and thus much research in this area focuses on NMDA receptor composition. We propose that there are additional, equally important developmental processes that influence synaptic plasticity, including mechanisms that are upstream (factors that influence NMDA receptors) and downstream (intracellular processes regulated by NMDA receptors) from NMDA receptor activation. The goal of this review is to summarize what is known and what is not well understood about developmental changes in functional plasticity at glutamatergic synapses, and in the end, attempt to relate these changes to maturation of neural networks.

  7. Neural network technologies

    Science.gov (United States)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  8. Phase-dependent stimulation effects on bursting activity in a neural network cortical simulation.

    Science.gov (United States)

    Anderson, William S; Kudela, Pawel; Weinberg, Seth; Bergey, Gregory K; Franaszczuk, Piotr J

    2009-03-01

    A neural network simulation with realistic cortical architecture has been used to study synchronized bursting as a seizure representation. This model has the property that bursting epochs arise and cease spontaneously, and bursting epochs can be induced by external stimulation. We have used this simulation to study the time-frequency properties of the evolving bursting activity, as well as effects due to network stimulation. The model represents a cortical region of 1.6 mm × 1.6 mm, and includes seven neuron classes organized by cortical layer, inhibitory or excitatory properties, and electrophysiological characteristics. There are a total of 65,536 modeled single-compartment neurons that operate according to a version of Hodgkin-Huxley dynamics. The intercellular wiring is based on histological studies and our previous modeling efforts. The bursting phase is characterized by a flat frequency spectrum. Stimulation pulses are applied to this modeled network, with an electric field provided by a 1 mm radius circular electrode represented mathematically in the simulation. A phase dependence to the post-stimulation quiescence is demonstrated, with local relative maxima in efficacy occurring before or during the network depolarization phase in the underlying activity. Brief periods of network insensitivity to stimulation are also demonstrated. The phase dependence was irregular and did not reach statistical significance when averaged over the full 2.5 s of simulated bursting investigated. This result provides comparison with previous in vivo studies which have also demonstrated increased efficacy of stimulation when pulses are applied at the peak of the local field potential during cortical afterdischarges. The network bursting is synchronous when comparing the different neuron classes represented up to an uncertainty of 10 ms. Studies performed with an excitatory chandelier cell component demonstrated increased synchronous bursting in the model, as predicted from

  9. Detection of micro solder balls using active thermography and probabilistic neural network

    Science.gov (United States)

    He, Zhenzhi; Wei, Li; Shao, Minghui; Lu, Xingning

    2017-03-01

    Micro solder ball/bump has been widely used in electronic packaging. It has been challenging to inspect these structures as the solder balls/bumps are often embedded between the component and substrates, especially in flip-chip packaging. In this paper, a detection method for micro solder ball/bump based on the active thermography and the probabilistic neural network is investigated. A VH680 infrared imager is used to capture the thermal image of the test vehicle, SFA10 packages. The temperature curves are processed using moving average technique to remove the peak noise. And the principal component analysis (PCA) is adopted to reconstruct the thermal images. The missed solder balls can be recognized explicitly in the second principal component image. Probabilistic neural network (PNN) is then established to identify the defective bump intelligently. The hot spots corresponding to the solder balls are segmented from the PCA reconstructed image, and statistic parameters are calculated. To characterize the thermal properties of solder bump quantitatively, three representative features are selected and used as the input vector in PNN clustering. The results show that the actual outputs and the expected outputs are consistent in identification of the missed solder balls, and all the bumps were recognized accurately, which demonstrates the viability of the PNN in effective defect inspection in high-density microelectronic packaging.
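
    The sketch below outlines the two stages in simplified form: PCA over the sequence of thermal frames to obtain a second principal-component image, and a probabilistic neural network (a Parzen-window classifier) over per-bump features. The data shapes, the three features and the kernel width are placeholders, not the paper's values.

```python
import numpy as np

# Stage 1: PCA reconstruction of a thermal image sequence.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64 * 64))          # 200 thermal frames, flattened pixels
X = frames - frames.mean(axis=0)                  # centre each pixel over time
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc2_image = Vt[1].reshape(64, 64)                 # 2nd principal-component image

# Stage 2: probabilistic neural network (PNN), i.e. one Gaussian kernel per
# training sample; the class score is the average kernel response per class.
def pnn_predict(x, X_train, y_train, sigma=0.5):
    scores = {}
    for c in np.unique(y_train):
        d2 = np.sum((X_train[y_train == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Per-bump statistical features (e.g. mean, std, peak of the hot-spot region).
X_train = rng.normal(size=(40, 3))
y_train = np.array([0] * 20 + [1] * 20)           # 0 = good bump, 1 = missed bump
print(pnn_predict(np.zeros(3), X_train, y_train))
```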

  10. Robust fixed-time synchronization for uncertain complex-valued neural networks with discontinuous activation functions.

    Science.gov (United States)

    Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar

    2017-06-01

    This paper is concerned with the fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only claims that the considered master-slave system realizes synchronization within a finite time segment, but also requires a uniform upper bound for such time intervals for all initial synchronization errors. To accomplish the target of fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of the Filippov discontinuity theories and Lyapunov stability theories, some sufficient conditions are established for the selection of control parameters to guarantee synchronization within a fixed time, while an upper bound of the settling time is acquired as well, which allows to be modulated to predefined values independently on initial conditions. Additionally, criteria of modified controller for assurance of fixed-time anti-synchronization are also derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Integration and transmission of distributed deterministic neural activity in feed-forward networks.

    Science.gov (United States)

    Asai, Yoshiyuki; Villa, Alessandro E P

    2012-01-24

    A ten-layer feed-forward network characterized by diverging/converging patterns of projection between successive layers of regular spiking (RS) neurons is activated by an external spatiotemporal input pattern fed to Layer 1 in the presence of stochastic background activities fed to all layers. We used three dynamical systems to derive the external input spike trains carrying the temporal information, and three types of neuron models for the network, i.e. a network formed either by neurons modeled with exponential integrate-and-fire dynamics (RS-EIF; Fourcaud-Trocmé et al., 2003), by simple spiking neurons (RS-IZH; Izhikevich, 2004), or by multiple-timescale adaptive threshold neurons (RS-MAT; Kobayashi et al., 2009), given five intensities of the background activity. The assessment of the temporal structure embedded in the output spike trains was carried out by detecting the preferred firing sequences for the reconstruction of de-noised spike trains (Asai and Villa, 2008). We confirmed that the RS-MAT model is likely to be more efficient in integrating and transmitting the temporal structure embedded in the external input. We observed that this structure could be propagated not only up to the 10th layer, but in some cases it was retained better beyond the 4th downstream layer. This study suggests that diverging/converging network structures, through the propagation of synfire activity, could play a key role in the transmission of complex temporal patterns of discharges associated with deterministic nonlinear activity. This article is part of a Special Issue entitled Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Self-organization of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Clark, J.W.; Winston, J.V.; Rafelski, J.

    1984-05-14

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (brainwashing) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena. 18 references, 2 figures.

  13. Recognizing changing seasonal patterns using neural networks

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); G. Draisma (Gerrit)

    1997-01-01

    textabstractIn this paper we propose a graphical method based on an artificial neural network model to investigate how and when seasonal patterns in macroeconomic time series change over time. Neural networks are useful since the hidden layer units may become activated only in certain seasons or

  14. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  15. Global Mittag-Leffler synchronization of fractional-order neural networks with discontinuous activations.

    Science.gov (United States)

    Ding, Zhixia; Shen, Yi; Wang, Leimin

    2016-01-01

    This paper is concerned with the global Mittag-Leffler synchronization for a class of fractional-order neural networks with discontinuous activations (FNNDAs). We give the concept of Filippov solution for FNNDAs in the sense of Caputo's fractional derivation. By using a singular Gronwall inequality and the properties of fractional calculus, the existence of global solution under the framework of Filippov for FNNDAs is proved. Based on the nonsmooth analysis and control theory, some sufficient criteria for the global Mittag-Leffler synchronization of FNNDAs are derived by designing a suitable controller. The proposed results enrich and enhance the previous reports. Finally, one numerical example is given to demonstrate the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Anti-glycated activity prediction of polysaccharides from two guava fruits using artificial neural networks.

    Science.gov (United States)

    Yan, Chunyan; Lee, Jinsheng; Kong, Fansheng; Zhang, Dezhi

    2013-10-15

    High-efficiency ultrasonic treatment was used to extract the polysaccharides of Psidium guajava (PPG) and Psidium littorale (PPL). The aims of this study were to compare polysaccharide activities from these two guavas, as well as to investigate the relationship between ultrasonic conditions and anti-glycated activity. A mathematical model of anti-glycated activity was constructed with the artificial neural network (ANN) toolbox of MATLAB software. Response surface plots showed the correlation between ultrasonic conditions and bioactivity. The optimal ultrasonic conditions of PPL for the highest anti-glycated activity were predicted to be 256 W, 60 °C, and 12 min, and the predicted activity was 42.2%. The predicted highest anti-glycated activity of PPG was 27.2% under its optimal predicted ultrasonic condition. The experimental result showed that PPG and PPL possessed anti-glycated and antioxidant activities, and those of PPL were greater. The experimental data also indicated that ANN had good prediction and optimization capability. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. A neural network-based electromyography motion classifier for upper limb activities

    Directory of Open Access Journals (Sweden)

    Karan Veer

    2016-11-01

    Full Text Available The objective of this work is to investigate the classification of different movements based on the surface electromyogram (SEMG) pattern recognition method. The testing was conducted for four arm movements using several experiments with an artificial neural network classification scheme. Six time-domain features were extracted and classification was subsequently implemented using a back-propagation neural classifier (BPNC). Further, the performance of the proposed network was verified using a cross-validation (CV) process, and an ANOVA algorithm was carried out. Performance of the network is analyzed by considering the mean square error (MSE) value. A comparison was performed between the extracted features and the back-propagation network results reported in the literature. The results indicate the significance of the proposed network, with a classification accuracy (CA) of 100% recorded from two channels, while the analysis of variance technique helps in investigating the effectiveness of the classified signal for recognition tasks.
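
    As an illustration of the feature-extraction step, the sketch below computes six commonly used time-domain SEMG features for one analysis window; the exact feature set and thresholds used in the paper are not specified here, so this particular selection is an assumption.

```python
import numpy as np

# Six typical time-domain SEMG features for a single analysis window:
# MAV, RMS, waveform length, zero crossings, slope sign changes, variance.
def time_domain_features(x, thresh=0.01):
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                                   # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))                             # root mean square
    wl = np.sum(np.abs(dx))                                    # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > thresh))  # zero crossings
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 (np.abs(dx[:-1]) > thresh))                   # slope sign changes
    var = np.var(x)                                            # variance
    return np.array([mav, rms, wl, zc, ssc, var])

window = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.1 * np.random.randn(512)
features = time_domain_features(window)    # would be fed to the back-propagation classifier
```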

  18. From baseline to epileptiform activity: A path to synchronized rhythmicity in large-scale neural networks

    Science.gov (United States)

    Shusterman, Vladimir; Troy, William C.

    2008-06-01

    In large-scale neural networks in the brain the emergence of global behavioral patterns, manifested by electroencephalographic activity, is driven by the self-organization of local neuronal groups into synchronously functioning ensembles. However, the laws governing such macrobehavior and its disturbances, in particular epileptic seizures, are poorly understood. Here we use a mean-field population network model to describe a state of baseline physiological activity and the transition from the baseline state to rhythmic epileptiform activity. We describe principles which explain how this rhythmic activity arises in the form of spatially uniform self-sustained synchronous oscillations. In addition, we show how the rate of migration of the leading edge of the synchronous oscillations can be theoretically predicted, and compare the accuracy of this prediction with that measured experimentally using multichannel electrocorticographic recordings obtained from a human subject experiencing epileptic seizures. The comparison shows that the experimentally measured rate of migration of the leading edge of synchronous oscillations is within the theoretically predicted range of values. Computer simulations have been performed to investigate the interactions between different regions of the brain and to show how organization in one spatial region can promote or inhibit organization in another. Our theoretical predictions are also consistent with the results of functional magnetic resonance imaging (fMRI), in particular with observations that lower-frequency electroencephalographic (EEG) rhythms entrain larger areas of the brain than higher-frequency rhythms. These findings advance the understanding of functional behavior of interconnected populations and might have implications for the analysis of diverse classes of networks.

  19. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    Science.gov (United States)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  20. Classification of human activity on water through micro-Dopplers using deep convolutional neural networks

    Science.gov (United States)

    Kim, Youngwook; Moon, Taesup

    2016-05-01

    Detecting humans and classifying their activities on the water has significant applications for surveillance, border patrols, and rescue operations. When humans are illuminated by a radar signal, they produce micro-Doppler signatures due to moving limbs. There has been considerable research into recognizing humans on land by their unique micro-Doppler signatures, but there is scant research into detecting humans on water. In this study, we investigate the micro-Doppler signatures of humans on water, including a swimming person, a swimming person pulling a floating object, and a rowing person in a small boat. The measured swimming styles were free stroke, backstroke, and breaststroke. Each activity was observed to have a unique micro-Doppler signature. Human activities were classified based on their micro-Doppler signatures. For the classification, we propose to apply deep convolutional neural networks (DCNN), a powerful deep learning technique. Rather than using conventional supervised learning that relies on handcrafted features, we present an alternative deep learning approach. We apply the DCNN, one of the most successful deep learning algorithms for image recognition, directly to a raw micro-Doppler spectrogram of humans on the water. Without extracting any explicit features from the micro-Dopplers, the DCNN can learn the necessary features and build classification boundaries using the training data. We show that the DCNN can achieve an accuracy of more than 87.8% for activity classification using 5-fold cross-validation.

  1. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others are unstable. The importance of the derived results is that they reveal that the discontinuous neural networks can have greater storage capacity than the continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions, but also in unsaturated regions, due to the non-monotonic structure of discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Demystifying Multitask Deep Neural Networks for Quantitative Structure-Activity Relationships.

    Science.gov (United States)

    Xu, Yuting; Ma, Junshui; Liaw, Andy; Sheridan, Robert P; Svetnik, Vladimir

    2017-10-23

    Deep neural networks (DNNs) are complex computational models that have found great success in many artificial intelligence applications, such as computer vision [1,2] and natural language processing [3,4]. In the past four years, DNNs have also generated promising results for quantitative structure-activity relationship (QSAR) tasks [5,6]. Previous work showed that DNNs can routinely make better predictions than traditional methods, such as random forests, on a diverse collection of QSAR data sets. It was also found that multitask DNN models (those trained on and predicting multiple QSAR properties simultaneously) outperform DNNs trained separately on the individual data sets in many, but not all, tasks. To date there has been no satisfactory explanation of why the QSAR of one task embedded in a multitask DNN can borrow information from other unrelated QSAR tasks. Thus, using multitask DNNs in a way that consistently provides a predictive advantage becomes a challenge. In this work, we explored why multitask DNNs make a difference in predictive performance. Our results show that during prediction a multitask DNN does borrow "signal" from molecules with similar structures in the training sets of the other tasks. However, whether this borrowing leads to better or worse predictive performance depends on whether the activities are correlated. On the basis of this, we have developed a strategy to use multitask DNNs that incorporate prior domain knowledge to select training sets with correlated activities, and we demonstrate its effectiveness on several examples.
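
    The sketch below shows the usual structure of such a multitask QSAR DNN in PyTorch: a shared trunk over molecular descriptors, one output head per task, and a masked loss so that missing activity values do not contribute; the layer sizes and masking scheme are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

# Schematic multitask QSAR DNN: shared trunk + one regression head per task.
class MultitaskQSAR(nn.Module):
    def __init__(self, n_descriptors=1024, n_tasks=5, hidden=512):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_descriptors, hidden), nn.ReLU(), nn.Dropout(0.25),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        z = self.trunk(x)
        return torch.cat([head(z) for head in self.heads], dim=1)   # (batch, n_tasks)

def masked_mse(pred, target, mask):
    # mask == 1 where an activity value exists for that molecule/task pair
    return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)

model = MultitaskQSAR()
x = torch.randn(32, 1024)                     # descriptor vectors (placeholders)
target = torch.randn(32, 5)                   # activities for 5 tasks (placeholders)
mask = (torch.rand(32, 5) > 0.5).float()      # which activities are measured
loss = masked_mse(model(x), target, mask)
loss.backward()
```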

  3. Artificial neural network optimization of Althaea rosea seeds polysaccharides and its antioxidant activity.

    Science.gov (United States)

    Liu, Feng; Liu, Wenhui; Tian, Shuge

    2014-09-01

    A combination of an orthogonal L16(4^4) test design and a three-layer artificial neural network (ANN) model was applied to optimize polysaccharides from Althaea rosea seeds extracted by the hot water method. The highest optimal experimental yield of A. rosea seed polysaccharides (ARSPs) of 59.85 mg/g was obtained using three extraction numbers, 113 min extraction time, 60.0% ethanol concentration, and 1:41 solid-liquid ratio. Under these optimized conditions, the ARSP experimental yield was very close to the predicted yield of 60.07 mg/g and was higher than the orthogonal test results (40.86 mg/g). Structural characterizations were conducted using physicochemical property and FTIR analysis. In addition, the study of ARSP antioxidant activity demonstrated that the polysaccharides exhibited high superoxide dismutase activity, strong reducing power, and positive scavenging activity on superoxide anion, hydroxyl radical, 2,2-diphenyl-1-picrylhydrazyl, and reducing power. Our results indicated that ANNs were efficient quantitative tools for predicting the total ARSP content. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Cognitive emotion regulation in children: Reappraisal of emotional faces modulates neural source activity in a frontoparietal network.

    Science.gov (United States)

    Wessing, Ida; Rehbein, Maimu A; Romer, Georg; Achtergarde, Sandra; Dobel, Christian; Zwitserlood, Pienie; Fürniss, Tilman; Junghöfer, Markus

    2015-06-01

    Emotion regulation has an important role in child development and psychopathology. Reappraisal as cognitive regulation technique can be used effectively by children. Moreover, an ERP component known to reflect emotional processing called late positive potential (LPP) can be modulated by children using reappraisal and this modulation is also related to children's emotional adjustment. The present study seeks to elucidate the neural generators of such LPP effects. To this end, children aged 8-14 years reappraised emotional faces, while neural activity in an LPP time window was estimated using magnetoencephalography-based source localization. Additionally, neural activity was correlated with two indexes of emotional adjustment and age. Reappraisal reduced activity in the left dorsolateral prefrontal cortex during down-regulation and enhanced activity in the right parietal cortex during up-regulation. Activity in the visual cortex decreased with increasing age, more adaptive emotion regulation and less anxiety. Results demonstrate that reappraisal changed activity within a frontoparietal network in children. Decreasing activity in the visual cortex with increasing age is suggested to reflect neural maturation. A similar decrease with adaptive emotion regulation and less anxiety implies that better emotional adjustment may be associated with an advance in neural maturation. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  5. Cognitive emotion regulation in children: Reappraisal of emotional faces modulates neural source activity in a frontoparietal network

    Directory of Open Access Journals (Sweden)

    Ida Wessing

    2015-06-01

    Full Text Available Emotion regulation has an important role in child development and psychopathology. Reappraisal as cognitive regulation technique can be used effectively by children. Moreover, an ERP component known to reflect emotional processing called late positive potential (LPP) can be modulated by children using reappraisal and this modulation is also related to children's emotional adjustment. The present study seeks to elucidate the neural generators of such LPP effects. To this end, children aged 8–14 years reappraised emotional faces, while neural activity in an LPP time window was estimated using magnetoencephalography-based source localization. Additionally, neural activity was correlated with two indexes of emotional adjustment and age. Reappraisal reduced activity in the left dorsolateral prefrontal cortex during down-regulation and enhanced activity in the right parietal cortex during up-regulation. Activity in the visual cortex decreased with increasing age, more adaptive emotion regulation and less anxiety. Results demonstrate that reappraisal changed activity within a frontoparietal network in children. Decreasing activity in the visual cortex with increasing age is suggested to reflect neural maturation. A similar decrease with adaptive emotion regulation and less anxiety implies that better emotional adjustment may be associated with an advance in neural maturation.

  6. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  7. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network.

    Science.gov (United States)

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-10-13

    Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. It is a method that uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to target the difficulty of consecutive fault diagnosis. DNNs with deep architectures, instead of shallow ones, can be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner, using a stacked denoising auto-encoder (SDAE) and a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, and in contrast to the available methods, we employ a novel active learning criterion tailored to chemical processes, a combination of the Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data through further active learning, compared with existing methods.
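
    As an illustration of the BvSB part of the active learning criterion, the sketch below ranks unlabeled samples by the margin between their two largest softmax probabilities and selects the smallest margins for labeling; the LFP criterion and the SDAE network itself are not reproduced, and the probabilities are placeholders.

```python
import numpy as np

# Best-vs-Second-Best (BvSB) selection: query the unlabeled sensor samples
# whose top-two class probabilities are closest, i.e. the most ambiguous ones.
def bvsb_query(softmax_probs, n_query=10):
    top2 = np.sort(softmax_probs, axis=1)[:, -2:]        # second-best, best
    margin = top2[:, 1] - top2[:, 0]                     # small margin = informative
    return np.argsort(margin)[:n_query]                  # indices to label next

probs = np.random.dirichlet(np.ones(4), size=1000)       # placeholder DNN outputs
query_idx = bvsb_query(probs, n_query=20)                # samples sent for labeling
```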

  8. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network

    Science.gov (United States)

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-01-01

    Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. It is a method that uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to target the difficulty of consecutive fault diagnosis. DNNs with deep architectures, instead of shallow ones, can be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner, using a stacked denoising auto-encoder (SDAE) and a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, and in contrast to the available methods, we employ a novel active learning criterion tailored to chemical processes, a combination of the Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data through further active learning, compared with existing methods. PMID:27754386

  9. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... examined, and it appears that considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible, and therefore it is not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models...... Kalman filter) representing state space description. The potential of neural networks for control of non-linear processes is also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  10. Active Noise Control Using a Functional Link Artificial Neural Network with the Simultaneous Perturbation Learning Rule

    Directory of Open Access Journals (Sweden)

    Ya-li Zhou

    2009-01-01

    Full Text Available In practical active noise control (ANC) systems, the primary path and the secondary path may be nonlinear and time-varying. It has been reported that the linear techniques used to control such ANC systems exhibit degradation in performance. In addition, the actuators of an ANC system very often have a nonminimum-phase response; a linear controller under such situations yields poor performance. A novel functional link artificial neural network (FLANN)-based simultaneous perturbation stochastic approximation (SPSA) algorithm, which functions as a nonlinear model-free (MF) controller, is proposed in this paper. Computer simulations have been carried out to demonstrate that the proposed algorithm outperforms the standard filtered-x least mean square (FXLMS) algorithm, and performs better than the recently proposed filtered-s least mean square (FSLMS) algorithm when the secondary path is time-varying. This observation implies that the SPSA-based MF controller can eliminate the need for modeling the secondary path in the ANC system.
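
    The sketch below shows the core SPSA step that allows controller weights to be adapted from two cost measurements without a model of the secondary path; the gain values, cost function and weight dimension are illustrative placeholders.

```python
import numpy as np

# Two-measurement SPSA gradient approximation: perturb all weights at once with
# a random +/-1 vector and estimate the gradient from two cost readings, here
# standing in for the measured residual noise power at the error microphone.
def spsa_step(w, measure_cost, a=0.01, c=0.01, rng=np.random.default_rng()):
    delta = rng.choice([-1.0, 1.0], size=w.shape)        # simultaneous perturbation
    j_plus = measure_cost(w + c * delta)
    j_minus = measure_cost(w - c * delta)
    g_hat = (j_plus - j_minus) / (2.0 * c * delta)       # gradient estimate
    return w - a * g_hat                                 # controller weight update

# Toy cost: a quadratic bowl standing in for the residual noise power.
cost = lambda w: float(np.sum((w - 0.3) ** 2))
w = np.zeros(8)
for _ in range(500):
    w = spsa_step(w, cost)
print(np.round(w, 3))
```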

  11. An Optoelectronic Neural Network

    Science.gov (United States)

    Neil, Mark A. A.; White, Ian H.; Carroll, John E.

    1990-02-01

    We describe and present results of an optoelectronic neural network processing system. The system uses an algorithm based on the Hebbian learning rule to memorise a set of associated vector pairs. Recall occurs by the processing of the input vector with these stored associations in an incoherent optical vector multiplier, using optical polarisation rotating liquid crystal spatial light modulators to store the vectors and an optical polarisation shadow casting technique to perform multiplications. Results are detected on a photodiode array and thresholded electronically by a controlling microcomputer. The processor is shown to work in autoassociative and heteroassociative modes with up to 10 stored memory vectors of length 64 (equivalent to 64 neurons) and a cycle time of 50 ms. We discuss the limiting factors at work in this system, how they affect its scalability and the general applicability of its principles to other systems.
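
    An electronic analogue of the described processing is sketched below: associated bipolar vector pairs are stored with a Hebbian outer-product rule, and recall is a vector-matrix multiplication followed by thresholding, mirroring the optical multiply-and-threshold stages; the sizes and memory load are illustrative.

```python
import numpy as np

# Hebbian outer-product associative memory with thresholded recall.
rng = np.random.default_rng(0)
n, m, n_pairs = 64, 64, 5
X = rng.choice([-1, 1], size=(n_pairs, n))     # input vectors
Y = rng.choice([-1, 1], size=(n_pairs, m))     # associated output vectors

W = Y.T @ X                                    # Hebbian learning (sum of outer products)

def recall(x):
    return np.sign(W @ x)                      # multiply, then threshold

probe = X[3].copy()
probe[:4] *= -1                                # corrupt a few input elements
print(np.array_equal(recall(probe), Y[3]))     # heteroassociative recall at low memory load
```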

  12. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically

  13. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    simulated process and compared. The closing chapter describes some practical experiments, where the different control concepts and training methods are tested on the same practical process operating in very noisy environments. All tests confirm that neural networks also have the potential to be trained......The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...

  14. Segmentation of magnetic resonance images using a combination of neural networks and active contour models.

    Science.gov (United States)

    Middleton, Ian; Damper, Robert I

    2004-01-01

    Segmentation of medical images is very important for clinical research and diagnosis, leading to a requirement for robust automatic methods. This paper reports on the combined use of a neural network (a multilayer perceptron, MLP) and active contour model ('snake') to segment structures in magnetic resonance (MR) images. The perceptron is trained to produce a binary classification of each pixel as either a boundary or a non-boundary point. Subsequently, the resulting binary (edge-point) image forms the external energy function for a snake, used to link the candidate boundary points into a continuous, closed contour. We report here on the segmentation of the lungs from multiple MR slices of the torso; lung-specific constraints have been avoided to keep the technique as general as possible. In initial investigations, the inputs to the MLP were limited to normalised intensity values of the pixels from a (7 x 7) window scanned across the image. The use of spatial coordinates as additional inputs to the MLP is then shown to provide an improvement in segmentation performance as quantified using the effectiveness measure (a weighted product of precision and recall). Training sets were first developed using a lengthy iterative process. Thereafter, a novel cost function based on effectiveness is proposed for training that allows us to achieve dramatic improvements in segmentation performance, as well as faster, non-iterative selection of training examples. The classifications produced using this cost function were sufficiently good that the binary image produced by the MLP could be post-processed using an active contour model to provide an accurate segmentation of the lungs from the multiple slices in almost all cases, including unseen slices and subjects.
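
    The effectiveness measure referred to scores the pixel-wise boundary classification by combining precision and recall. One literal reading of "weighted product" (alpha and the exact combination rule are assumptions; the paper's definition may differ in detail) is:

        import numpy as np

        def effectiveness(pred_boundary, true_boundary, alpha=0.5):
            # Pixel-wise precision and recall of the boundary classification,
            # combined as a weighted (geometric) product.
            pred = pred_boundary.astype(bool)
            truth = true_boundary.astype(bool)
            tp = np.logical_and(pred, truth).sum()
            precision = tp / max(pred.sum(), 1)
            recall = tp / max(truth.sum(), 1)
            return (precision ** alpha) * (recall ** (1.0 - alpha))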

  15. Neural-like growing networks

    Science.gov (United States)

    Yashchenko, Vitaliy A.

    2000-03-01

    On the basis of an analysis of scientific ideas reflecting the laws governing the structure and functioning of the biological structures of the brain, together with an analysis and synthesis of knowledge developed in various branches of computer science, the foundations of a theory of a new class of neural-like growing networks, having no analogue in world practice, have been developed. Neural-like growing networks rest on a synthesis of the knowledge developed by two classical theories: semantic networks and neural networks. The first makes it possible to represent meaning as objects and the connections between them, in accordance with the construction of the network; each meaning thereby obtains a separate component of the network as a node (vertex) connected to other nodes, which corresponds well to the structure reflected in the brain, where each explicit concept is represented by a certain structure and has a designating symbol. Second, this network gains increased semantic clarity owing to the formation not only of connections between neural elements but also of the elements themselves; that is, the network is not simply built by placing semantic structures in an environment of neural elements, but that environment itself is created as an equivalent of the memory environment. Neural-like growing networks are thus a convenient apparatus for modelling the mechanisms of teleological thinking, as the fulfilment of certain psychophysiological functions.

  16. Neural network based forward prediction of bladder pressure using pudendal nerve electrical activity.

    Science.gov (United States)

    Geramipour, A; Makki, S; Erfanian, A

    2015-01-01

    Individuals with spinal cord injury or neurological disorders have problems with urinary bladder storage and voiding function. In these people, the detrusor of the bladder contracts at low volume and this causes incontinence. The goal of bladder control is to increase bladder capacity by electrical stimulation of the relevant nerves, such as the pelvic nerves, sacral nerve roots or pudendal nerves. For this purpose, the bladder pressure has to be monitored continuously. In this paper, we propose a method for real-time estimation of bladder pressure using an artificial neural network. The method is based upon measurements of the electroneurogram (ENG) signal of the pudendal nerve. This approach yields synthetic bladder pressure estimates during bladder contraction. The experiments were conducted on three rats. The results show that the neural predictor can provide accurate estimation and prediction of bladder pressure with good generalization ability. The average errors of 1-second and 5-second ahead prediction of bladder pressure are 9.62% and 10.54%, respectively.
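
    As a rough sketch of this kind of forward predictor, a small feed-forward regressor can map a sliding window of processed ENG features to the bladder pressure a few samples ahead. The window length, horizon, network size and the synthetic signals below are all assumptions for illustration; scikit-learn's MLPRegressor stands in for whatever network the authors trained.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def make_dataset(eng_feature, pressure, window=10, horizon=5):
            # Sliding window of past ENG features -> pressure `horizon` samples ahead.
            X, y = [], []
            for t in range(window, len(pressure) - horizon):
                X.append(eng_feature[t - window:t])
                y.append(pressure[t + horizon])
            return np.array(X), np.array(y)

        # Illustrative synthetic signals standing in for the recorded data.
        t = np.arange(2000)
        pressure = 10 + 5 * np.sin(2 * np.pi * t / 400)
        eng_feature = pressure + np.random.normal(0, 0.5, size=t.shape)

        X, y = make_dataset(eng_feature, pressure)
        model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)
        print(model.score(X, y))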

  17. Artificial neural networks from MATLAB in medicinal chemistry. Bayesian-regularized genetic neural networks (BRGNN): application to the prediction of the antagonistic activity against human platelet thrombin receptor (PAR-1).

    Science.gov (United States)

    Caballero, Julio; Fernández, Michael

    2008-01-01

    Artificial neural networks (ANNs) have been widely used for medicinal chemistry modeling. In the last two decades, many reports have used the MATLAB environment as an adequate platform for programming ANNs. Some of these reports comprise a variety of applications intended to quantitatively or qualitatively describe structure-activity relationships. A powerful tool is obtained when Bayesian-regularized neural networks (BRANNs) are combined with a genetic algorithm (GA): Bayesian-regularized genetic neural networks (BRGNNs). BRGNNs can model complicated relationships between explanatory variables and dependent variables. Thus, this methodology is regarded as a useful tool for QSAR analysis. In order to demonstrate the use of BRGNNs, we developed a reliable method for predicting the antagonistic activity of 5-amino-3-arylisoxazole derivatives against the Human Platelet Thrombin Receptor (PAR-1), using classical 3D-QSAR methodologies: Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA). In addition, 3D vectors generated from the molecular structures were correlated with antagonistic activities by multivariate linear regression (MLR) and Bayesian-regularized genetic neural networks (BRGNNs). All models were trained with 34 compounds, after which they were evaluated for predictive ability with an additional 6 compounds. CoMFA and CoMSIA were unable to describe this structure-activity relationship, while the BRGNN methodology gave the best results according to the validation statistics.

  18. Genetic neural networks for quantitative structure-activity relationships: improvements and application of benzodiazepine affinity for benzodiazepine/GABAA receptors.

    Science.gov (United States)

    So, S S; Karplus, M

    1996-12-20

    A novel tool, called a genetic neural network (GNN), has been developed for obtaining quantitative structure-activity relationships (QSAR) for high-dimensional data sets (J. Med. Chem. 1996, 39, 1521-1530). The GNN method uses a neural network to correlate activity with descriptors that are preselected by a genetic algorithm. To provide an extended test of the GNN method, the data on 57 benzodiazepines given by Maddalena and Johnston (MJ; J. Med. Chem. 1995, 38, 715-724) have been examined with an enhanced version of GNN, and the results are compared with the excellent QSAR of MJ. The problematic steepest descent training has been replaced by the scaled conjugate gradient algorithm. This leads to a substantial gain in performance in both robustness of prediction and speed of computation. The cross-validation GNN simulation and the subsequent run based on an unbiased and more efficient protocol led to the discovery of other 10-descriptor QSARs that are superior to the best model of MJ based on backward elimination selection and neural network training. Results from a series of GNNs with a different number of inputs showed that a neural network with fewer inputs can produce QSARs as good as or even better than those with higher dimensions. The top-ranking models from a GNN simulation using only six input descriptors are presented, and the chemical significance of the chosen descriptors is discussed. The statistical significance of these GNN QSARs is validated. The best QSARs are used to provide a graphical tool that aids the design of new drug analogues. By replacing functional groups at the 7- and 2'-positions with ones that have optimal substituent parameters, a number of new benzodiazepines with high potency are predicted.

  19. Artificial Neural Networks·

    Indian Academy of Sciences (India)

    differences between biological neural networks (BNNs) of the brain and ANNs. A thorough understanding of ... neurons. Artificial neural models are loosely based on biology since a complete understanding of the .... A learning scheme for updating a neuron's connections (weights) was proposed by Donald Hebb in 1949.

  20. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  1. Memristor-based neural networks

    Science.gov (United States)

    Thomas, Andy

    2013-03-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them.

  2. Pansharpening by Convolutional Neural Networks

    National Research Council Canada - National Science Library

    Masi, Giuseppe; Cozzolino, Davide; Verdoliva, Luisa; Scarpa, Giuseppe

    2016-01-01

    A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem...

  3. Bayesian regularization of neural networks.

    Science.gov (United States)

    Burden, Frank; Winkler, Dave

    2008-01-01

    Bayesian regularized artificial neural networks (BRANNs) are more robust than standard back-propagation nets and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of a ridge regression. The advantage of BRANNs is that the models are robust and the validation process, which scales as O(N²) in normal regression methods, such as back propagation, is unnecessary. These networks provide solutions to a number of problems that arise in QSAR modeling, such as choice of model, robustness of model, choice of validation set, size of validation effort, and optimization of network architecture. They are difficult to overtrain, since evidence procedures provide an objective Bayesian criterion for stopping training. They are also difficult to overfit, because the BRANN calculates and trains on a number of effective network parameters or weights, effectively turning off those that are not relevant. This effective number is usually considerably smaller than the number of weights in a standard fully connected back-propagation neural net. Automatic relevance determination (ARD) of the input variables can be used with BRANNs, and this allows the network to "estimate" the importance of each input. The ARD method ensures that irrelevant or highly correlated indices used in the modeling are neglected as well as showing which are the most important variables for modeling the activity data. This chapter outlines the equations that define the BRANN method plus a flowchart for producing a BRANN-QSAR model. Some results of the use of BRANNs on a number of data sets are illustrated and compared with other linear and nonlinear models.
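
    In outline, the BRANN objective is a weighted sum of the data misfit and a weight penalty, with the weighting re-estimated from the evidence; the "effective number of parameters" mentioned above falls out of that re-estimation. A schematic of one MacKay-style update cycle (a simplified sketch, with the Hessian of the objective assumed available) is:

        import numpy as np

        def objective(residuals, weights, alpha, beta):
            # F = beta * E_D + alpha * E_W : data misfit plus weight-decay penalty.
            e_d = 0.5 * np.sum(residuals ** 2)
            e_w = 0.5 * np.sum(weights ** 2)
            return beta * e_d + alpha * e_w

        def evidence_update(alpha, residuals, weights, hessian):
            # gamma = N_w - alpha * trace(H^-1) is the effective number of weights
            # the data actually constrain; alpha and beta are then re-estimated.
            gamma = len(weights) - alpha * np.trace(np.linalg.inv(hessian))
            e_w = 0.5 * np.sum(weights ** 2)
            e_d = 0.5 * np.sum(residuals ** 2)
            alpha_new = gamma / (2.0 * e_w + 1e-12)
            beta_new = (len(residuals) - gamma) / (2.0 * e_d + 1e-12)
            return alpha_new, beta_new, gamma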

  4. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: February 2008.

  5. Biologically Inspired Modular Neural Networks

    OpenAIRE

    Azam, Farooq

    2000-01-01

    This dissertation explores modular learning in artificial neural networks, mainly driven by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological and neurobiological aspects. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning ...

  6. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analyses of the performance of NNs with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  7. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  8. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate the biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  9. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  10. Robust Finite-Time Stabilization of Fractional-Order Neural Networks With Discontinuous and Continuous Activation Functions Under Uncertainty.

    Science.gov (United States)

    Ding, Zhixia; Zeng, Zhigang; Wang, Leimin

    2017-03-10

    This paper is concerned with robust finite-time stabilization for a class of fractional-order neural networks (FNNs) with two types of activation functions (i.e., discontinuous and continuous activation functions) under uncertainty. It is worth noting that there exist few results about FNNs with discontinuous activation functions, which is mainly because classical solutions and theories of differential equations cannot be applied in this case. In particular, there is no relevant finite-time stabilization research for such systems, and this paper fills that gap. The existence of a global solution under the framework of Filippov for such systems is guaranteed by placing restrictions on the discontinuous activation functions. According to set-valued analysis and Kakutani's fixed point theorem, we obtain the existence of an equilibrium point. In particular, based on differential inclusion theory and fractional Lyapunov stability theory, several new sufficient conditions are given to ensure finite-time stabilization via a novel discontinuous controller, and the upper bound of the settling time for stabilization is estimated. In addition, we analyze the finite-time stabilization of FNNs with Lipschitz-continuous activation functions under uncertainty. The results of this paper improve the corresponding results for integer-order neural networks with discontinuous and continuous activation functions. Finally, three numerical examples are given to show the effectiveness of the theoretical results.

  11. Prediction of activity coefficients at infinite dilution for organic solutes in ionic liquids by artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nami, Faezeh [Department of Chemistry, Shahid Beheshti University, G.C., Evin-Tehran 1983963113 (Iran, Islamic Republic of); Deyhimi, Farzad, E-mail: f-deyhimi@sbu.ac.i [Department of Chemistry, Shahid Beheshti University, G.C., Evin-Tehran 1983963113 (Iran, Islamic Republic of)

    2011-01-15

    To our knowledge, this work illustrates for the first time the ability of an artificial neural network (ANN) to predict activity coefficients at infinite dilution for organic solutes in ionic liquids (ILs). The activity coefficient at infinite dilution (γ∞) is a useful parameter which can be used for the selection of an effective solvent in separation processes. Using a multi-layer feed-forward network with the Levenberg-Marquardt optimization algorithm, the resulting ANN model generated activity coefficient at infinite dilution data over a temperature range of 298 to 363 K. The unavailable input data concerning the softness (S) of the organic compounds (solutes) and the dipole moment (μ) of the ionic liquids were calculated using the GAMESS suite of quantum chemistry programs. The resulting ANN model and its validation are based on the investigation of up to 24 structurally different organic compounds (alkanes, alkenes, alkynes, cycloalkanes, aromatics, and alcohols) in 16 common imidazolium-based ionic liquids, at different temperatures within the range of 298 to 363 K (i.e. a total of 914 γ∞ data points for the solutes in the ILs). The results show satisfactory agreement between the ANN predictions and the experimental data: the root mean square error (RMSE) and the determination coefficient (R²) of the designed neural network were found to be 0.103 and 0.996 for the training data and 0.128 and 0.994 for the testing data, respectively.
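
    The workflow described (descriptor inputs, a feed-forward net, RMSE and R² on held-out data) can be sketched as below. The synthetic descriptor matrix is a placeholder, and scikit-learn's MLPRegressor (trained with Adam or L-BFGS rather than the Levenberg-Marquardt routine used in the paper) stands in for the MATLAB network.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error, r2_score

        # X: descriptor matrix (e.g. temperature, solute softness, IL dipole moment, ...),
        # y: activity coefficient target; random placeholders stand in for the 914 points.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(914, 6))
        y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=914)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
        scaler = StandardScaler().fit(X_tr)
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
        net.fit(scaler.transform(X_tr), y_tr)

        pred = net.predict(scaler.transform(X_te))
        print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
        print("R2:  ", r2_score(y_te, pred))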

  12. Application of neural networks for unfolding neutron spectra measured by means of Bonner spheres and activation foils

    CERN Document Server

    Braga, C C

    2001-01-01

    A neural network structure has been used for unfolding neutron spectra measured by means of a Bonner Sphere Spectrometer set and a foil activation set using several neutron induced reactions. The present work used the SNNS (Stuttgart Neural Network Simulator) as the interface for designing, training and validation of the Multilayer Perceptron network. The back-propagation algorithm was applied. The Bonner Sphere set chosen has been calibrated at the National Physical Laboratory, United Kingdom, and uses gold activation foils as thermal neutron detectors. The neutron energy covered by the response functions goes from 0.0001 eV to 14 MeV. The foil activation set chosen has been irradiated at the IEA-R1 research reactor and measured at the Nuclear Metrology Laboratory of IPEN-CNEN/SP. Two types of neutron spectra were numerically investigated: monoenergetic and continuous. The unfolded spectra were compared to those of a conventional method using the code SAND-II as part of the neutron dosimetry system SAIPS. Good results wer...

  13. Application of artificial neural network in precise prediction of cement elements percentages based on the neutron activation analysis

    Science.gov (United States)

    Eftekhari Zadeh, E.; Feghhi, S. A. H.; Roshani, G. H.; Rezaei, A.

    2016-05-01

    Due to variation of neutron energy spectrum in the target sample during the activation process and to peak overlapping caused by the Compton effect with gamma radiations emitted from activated elements, which results in background changes and consequently complex gamma spectrum during the measurement process, quantitative analysis will ultimately be problematic. Since there is no simple analytical correlation between peaks' counts with elements' concentrations, an artificial neural network for analyzing spectra can be a helpful tool. This work describes a study on the application of a neural network to determine the percentages of cement elements (mainly Ca, Si, Al, and Fe) using the neutron capture delayed gamma-ray spectra of the substance emitted by the activated nuclei as patterns which were simulated via the Monte Carlo N-particle transport code, version 2.7. The Radial Basis Function (RBF) network is developed with four specific peaks related to Ca, Si, Al and Fe, which were extracted as inputs. The proposed RBF model is developed and trained with MATLAB 7.8 software. To obtain the optimal RBF model, several structures have been constructed and tested. The comparison between simulated and predicted values using the proposed RBF model shows that there is a good agreement between them.
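
    A minimal Gaussian RBF network of the kind described, with peak-count inputs and element-percentage outputs, might be set up as follows; centre selection by k-means, the width sigma, and the synthetic data are assumptions for illustration.

        import numpy as np
        from sklearn.cluster import KMeans

        class RBFNet:
            # Gaussian radial basis function network: a hidden layer of fixed centres,
            # linear output weights fitted by least squares.
            def __init__(self, n_centres=10, sigma=1.0):
                self.n_centres, self.sigma = n_centres, sigma

            def _phi(self, X):
                d = np.linalg.norm(X[:, None, :] - self.centres[None, :, :], axis=2)
                return np.exp(-(d ** 2) / (2 * self.sigma ** 2))

            def fit(self, X, Y):
                self.centres = KMeans(self.n_centres, n_init=10, random_state=0).fit(X).cluster_centers_
                self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
                return self

            def predict(self, X):
                return self._phi(X) @ self.W

        # Inputs: four characteristic peak counts; outputs: Ca, Si, Al, Fe percentages (placeholders).
        X = np.random.rand(200, 4)
        Y = X @ np.random.rand(4, 4)
        model = RBFNet(n_centres=12, sigma=0.5).fit(X, Y)
        print(model.predict(X[:3]))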

  14. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    National Research Council Canada - National Science Library

    Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey

    2006-01-01

    .... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...

  15. Antagonistic neural networks underlying differentiated leadership roles

    OpenAIRE

    Richard Eleftherios Boyatzis; Kylie eRochford; Anthony Ian Jack

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950’s. Recent research in neuroscience suggests that the division between task oriented and socio-emotional oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN) and the Default Mode Network (DMN). Neural activity in ...

  16. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic...... model. Hence, in principle it is possible to achieve reliable experimental data for much larger water depths than what the actual depth of the test basin would suggest. However, since the computations must be faster than real time, as the numerical simulations and the physical experiment run...... reduction in computational effort when performing time domain simulation of mooring lines. The hybrid method uses a classical numerical model to generate simulation data, which are subsequently used to train the ANN. After successful training the ANN is able to take over the simulation at a speed two...

  17. Prediction of Increasing Production Activities using Combination of Query Aggregation on Complex Events Processing and Neural Network

    Directory of Open Access Journals (Sweden)

    Achmad Arwan

    2016-07-01

    Full Text Available Production, orders, sales, and shipments are series of interrelated events within the manufacturing industry, and these events are recorded in the event log. Complex event processing is a method used to analyze whether patterns of combinations of certain events (opportunities/threats) occur in a system, so that they can be addressed quickly and appropriately. An artificial neural network is the method used to classify production-increase activities. The recorded series of events that caused increases in production are used as training data to obtain the activation function of the neural network, and aggregated event-log counts are fed into the neural network input to compute the activation value. When the activation value exceeds a specified threshold, the system emits a signal to increase production; otherwise, the system keeps monitoring events. Experimental results show that the accuracy of this method is 77% on 39 series of event streams. Keywords: complex event processing, event, artificial neural network, production increase prediction, process.

  18. Multigradient for Neural Networks for Equalizers

    Directory of Open Access Journals (Sweden)

    Chulhee Lee

    2003-06-01

    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that the multigradient outperforms backpropagation when neural networks are used as classifiers. When neural networks are used as equalizers in communications, they can be viewed as classifiers. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.

  19. Computerized cognitive training restores neural activity within the reality monitoring network in schizophrenia.

    Science.gov (United States)

    Subramaniam, Karuna; Luks, Tracy L; Fisher, Melissa; Simpson, Gregory V; Nagarajan, Srikantan; Vinogradov, Sophia

    2012-02-23

    Schizophrenia patients suffer from severe cognitive deficits, such as impaired reality monitoring. Reality monitoring is the ability to distinguish the source of internal experiences from outside reality. During reality monitoring tasks, schizophrenia patients make errors identifying "I made it up" items, and even during accurate performance, they show abnormally low activation of the medial prefrontal cortex (mPFC), a region that supports self-referential cognition. We administered 80 hr of computerized training of cognitive processes to schizophrenia patients and found improvement in reality monitoring that correlated with increased mPFC activity. In contrast, patients in a computer games control condition did not show any behavioral or neural improvements. Notably, recovery in mPFC activity after training was associated with improved social functioning 6 months later. These findings demonstrate that a serious behavioral deficit in schizophrenia, and its underlying neural dysfunction, can be improved by well-designed computerized cognitive training, resulting in better quality of life. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Multiprocessor Neural Network in Healthcare.

    Science.gov (United States)

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and the large number of I/O ports make them highly suitable for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG, etc.

  1. The application of the multi-alternative approach in active neural network models

    Science.gov (United States)

    Podvalny, S.; Vasiljev, E.

    2017-02-01

    The article concerns the construction of intelligent systems based on artificial neural networks. We discuss the basic respects in which artificial neural networks fail to match their biological prototypes. It is shown that the main reason for these discrepancies is the structural immutability of neural network models during the learning process, that is, their passivity. Based on the modern understanding of the biological nervous system as a structured ensemble of nerve cells, it is proposed to abandon attempts to simulate its operation at the level of elementary neuron processes and instead to reproduce the information structure of data storage and processing on the basis of sufficiently general evolutionary principles of multi-alternativity, i.e. a multi-level structural model, diversity and modularity. A method for implementing these principles is offered, using a faceted memory organization in a neural network with a rearrangeable active structure. An example is given of the implementation of an active facet-type neural network in an intelligent decision-making system for conditions in which critical events develop in an electrical distribution system.

  2. Prediction of human actions: expertise and task-related effects on neural activation of the action observation network.

    Science.gov (United States)

    Balser, Nils; Lorey, Britta; Pilgramm, Sebastian; Stark, Rudolf; Bischoff, Matthias; Zentgraf, Karen; Williams, Andrew Mark; Munzert, Jörn

    2014-08-01

    The action observation network (AON) is supposed to play a crucial role when athletes anticipate the effect of others' actions in sports such as tennis. We used functional magnetic resonance imaging to explore whether motor expertise leads to a differential activation pattern within the AON during effect anticipation and whether spatial and motor anticipation tasks are associated with a differential activation pattern within the AON depending on participant expertise level. Expert (N=16) and novice (N=16) tennis players observed video clips depicting forehand strokes with the instruction to either indicate the predicted direction of ball flight (spatial anticipation) or to decide on an appropriate response to the observed action (motor anticipation). The experts performed better than novices on both tennis anticipation tasks, with the experts showing stronger neural activation in areas of the AON, namely, the superior parietal lobe, the intraparietal sulcus, the inferior frontal gyrus, and the cerebellum. When novices were contrasted with experts, motor anticipation resulted in stronger activation of the ventral premotor cortex, the supplementary motor area, and the superior parietal lobe than spatial anticipation task did. In experts, the comparison of motor and spatial anticipation revealed no increased activation. We suggest that the stronger activation of areas in the AON during the anticipation of action effects in experts reflects their use of the more fine-tuned motor representations they have acquired and improved during years of training. Furthermore, results suggest that the neural processing of different anticipation tasks depends on the expertise level. Copyright © 2014 Wiley Periodicals, Inc.

  3. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS), which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
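
    The two ARS stages described, an input-delay representation of the accelerometer stream followed by an output filter that exploits the long duration of digging, can be sketched as below; the window and filter widths are assumptions, and any small classifier could take the place of the IDNN on the embedded vectors.

        import numpy as np

        def delay_embed(acc, delays=20):
            # Input-delay representation: each sample is the window of the last
            # `delays` tri-axial accelerometer readings, flattened into one vector.
            return np.array([acc[t - delays:t].ravel() for t in range(delays, len(acc))])

        def output_filter(pred, half_width=15):
            # Majority vote over a sliding window: digging lasts up to two hours,
            # so isolated misclassifications within a segment can be discarded.
            smoothed = pred.copy()
            for t in range(len(pred)):
                lo, hi = max(0, t - half_width), min(len(pred), t + half_width + 1)
                vals, counts = np.unique(pred[lo:hi], return_counts=True)
                smoothed[t] = vals[np.argmax(counts)]
            return smoothed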

  4. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat

  5. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate

  6. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate

  7. Adolescents' risky decision-making activates neural networks related to social cognition and cognitive control processes.

    Science.gov (United States)

    Rodrigo, María José; Padrón, Iván; de Vega, Manuel; Ferstl, Evelyn C

    2014-01-01

    This study examines by means of functional magnetic resonance imaging the neural mechanisms underlying adolescents' risk decision-making in social contexts. We hypothesize that the social context could engage brain regions associated with social cognition processes and developmental changes are also expected. Sixty participants (adolescents: 17-18, and young adults: 21-22 years old) read narratives describing typical situations of decision-making in the presence of peers. They were asked to make choices in risky situations (e.g., taking or refusing a drug) or ambiguous situations (e.g., eating a hamburger or a hotdog). Risky as compared to ambiguous scenarios activated bilateral temporoparietal junction (TPJ), bilateral middle temporal gyrus (MTG), right medial prefrontal cortex, and the precuneus bilaterally; i.e., brain regions related to social cognition processes, such as self-reflection and theory of mind (ToM). In addition, brain structures related to cognitive control were active [right anterior cingulate cortex (ACC), bilateral dorsolateral prefrontal cortex (DLPFC), bilateral orbitofrontal cortex], whereas no significant clusters were obtained in the reward system (ventral striatum). Choosing the dangerous option involved a further activation of control areas (ACC) and emotional and social cognition areas (temporal pole). Adolescents employed more neural resources than young adults in the right DLPFC and the right TPJ in risk situations. When choosing the dangerous option, young adults showed a further engagement in ToM related regions (bilateral MTG) and in motor control regions related to the planning of actions (pre-supplementary motor area). Finally, the right insula and the right superior temporal gyrus were more activated in women than in men, suggesting more emotional involvement and more intensive modeling of the others' perspective in the risky conditions. These findings call for more comprehensive developmental accounts of decision-making in

  8. Adolescents’ risky decision-making activates neural networks related to social cognition and cognitive control processes

    Directory of Open Access Journals (Sweden)

    María José eRodrigo

    2014-02-01

    Full Text Available This study examines by means of fMRI the neural mechanisms underlying adolescents’ risk decision-making in social contexts. We hypothesize that the social context could engage brain regions associated with social cognition processes and developmental changes are also expected. Sixty participants (adolescents: 17-18, and young adults: 21-22 years old) read narratives describing typical situations of decision-making in the presence of peers. They were asked to make choices in risky situations (e.g., taking or refusing a drug) or ambiguous situations (e.g., eating a hamburger or a hotdog). Risky as compared to ambiguous scenarios activated bilateral temporoparietal junction (TPJ), bilateral middle temporal gyrus (MTG), right medial prefrontal cortex (mPFC), and the precuneus bilaterally; i.e., brain regions related to social cognition processes, such as self-reflection and theory of mind. In addition, brain structures related to cognitive control were active (right ACC, bilateral DLPFC, bilateral OFC), whereas no significant clusters were obtained in the reward system (VS). Choosing the dangerous option involved a further activation of control areas (ACC) and emotional and social cognition areas (temporal pole). Adolescents employed more neural resources than young adults in the right DLPFC and the right TPJ in risk situations. When choosing the dangerous option, young adults showed a further engagement in theory of mind related regions (bilateral middle temporal gyrus) and in motor control regions related to the planning of actions (pre-supplementary motor area). Finally, the right insula and the right superior temporal gyrus were more activated in women than in men, suggesting more emotional involvement and more intensive modeling of the others’ perspective in the risky conditions. These findings call for more comprehensive developmental accounts of decision-making in social contexts that incorporate the role of emotional and social cognition processes.

  9. Structure Crack Identification Based on Surface-mounted Active Sensor Network with Time-Domain Feature Extraction and Neural Network

    Directory of Open Access Journals (Sweden)

    Chunling DU

    2012-03-01

    Full Text Available In this work the condition of metallic structures is classified based on the acquired sensor data from a surface-mounted piezoelectric sensor/actuator network. The structures are aluminum plates with riveted holes and possible crack damage at these holes. A 400 kHz sine wave burst is used as the diagnostic signal. The combination of time-domain S0 waves from the received sensor signals is directly used as features, and preprocessing is not needed for the damage detection. Since the time sequence of the extracted S0 has a high dimension, principal component estimation is applied to reduce its dimension before entering NN (neural network) training for classification. An LVQ (learning vector quantization) NN is used to classify the conditions as healthy or damaged. A number of FEM (finite element modeling) results are taken as inputs to the NN for training, since the simulated S0 waves agree well with the experimental results on real plates. The performance of the classification is then validated using these test results.
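
    The pipeline described, principal component reduction of the S0 wave segments followed by LVQ classification into healthy or damaged, can be sketched roughly as below. scikit-learn has no LVQ, so a plain LVQ1 update is written out; prototype counts, learning rate and the placeholder data are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        def train_lvq1(X, y, n_proto_per_class=2, lr=0.05, epochs=50, seed=0):
            # LVQ1: the nearest prototype is pulled towards samples of its own class
            # and pushed away from samples of other classes.
            rng = np.random.default_rng(seed)
            classes = np.unique(y)
            protos = np.vstack([X[y == c][rng.choice((y == c).sum(), n_proto_per_class)]
                                for c in classes])
            labels = np.repeat(classes, n_proto_per_class)
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    j = np.argmin(np.linalg.norm(protos - X[i], axis=1))
                    sign = 1.0 if labels[j] == y[i] else -1.0
                    protos[j] += sign * lr * (X[i] - protos[j])
            return protos, labels

        def predict_lvq(protos, labels, X):
            return labels[np.argmin(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)]

        # S0 wave segments (high-dimensional time series) -> principal components -> LVQ.
        segments = np.random.rand(100, 400)       # placeholder sensor data
        condition = np.random.randint(0, 2, 100)  # 0 = healthy, 1 = cracked (placeholder)
        Z = PCA(n_components=8).fit_transform(segments)
        protos, labels = train_lvq1(Z, condition)
        print(predict_lvq(protos, labels, Z[:5]))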

  10. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  11. voltage compensation using artificial neural network

    African Journals Online (AJOL)

    Offor Theophilos

    VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK: A CASE STUDY OF RUMUOLA ... using an artificial neural network (ANN) controller-based dynamic voltage restorer (DVR). ... substation by simulating with samples of average voltage for Omerelu, Waterlines, Rumuola, Shell Industrial and Barracks.

  12. Plant Growth Models Using Artificial Neural Networks

    Science.gov (United States)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  13. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
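
    In the dual feed-forward picture, each stored memory becomes a hidden unit whose activation is a rectified polynomial F_n(x) = max(x, 0)^n; n = 1 recovers the ordinary rectified linear unit, and larger n shifts the learned representation from features toward prototypes. A minimal sketch with illustrative sizes:

        import numpy as np

        def rectified_poly(x, n):
            # F_n(x) = max(x, 0)**n; n = 1 is the usual ReLU, larger n pushes the
            # hidden units towards prototype-like behaviour.
            return np.maximum(x, 0.0) ** n

        def dual_feedforward(v, memories, class_weights, n=3):
            # One-hidden-layer network of the dual description: visible pattern v,
            # one hidden unit per stored memory, rectified-polynomial activation.
            h = rectified_poly(memories @ v, n)
            return np.tanh(class_weights @ h)

        rng = np.random.default_rng(0)
        memories = rng.choice([-1.0, 1.0], size=(20, 64))   # stored patterns
        class_weights = rng.normal(scale=0.1, size=(10, 20))
        v = rng.choice([-1.0, 1.0], size=64)
        print(dual_feedforward(v, memories, class_weights))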

  14. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation has spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  15. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 3, Issue 9. Optoelectronic Implementation of Neural Networks - Use of Optics in Computing. R Ramachandran. General Article, September 1998, pp. 45-55.

  16. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  17. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  18. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalman predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  19. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    In this paper, a quantum based binary neural network algorithm is proposed, named as novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and separability parameter in quantum based manner. Quantum computing concept represents solution probabilistically ...

  20. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  2. Fluorescent probes as a tool for cell population tracking in spontaneously active neural networks derived from human pluripotent stem cells.

    Science.gov (United States)

    Mäkinen, M; Joki, T; Ylä-Outinen, L; Skottman, H; Narkilahti, S; Aänismaa, R

    2013-04-30

    Applications such as 3D cultures and tissue modelling require cell tracking with non-invasive methods. In this work, the suitability of two fluorescent probes, CellTracker, CT, and long chain carbocyanine dye, DiD, was investigated for long-term culturing of labeled human pluripotent stem cell-derived neural cells. We found that these dyes did not affect the cell viability. However, proliferation was decreased in DiD labeled cell population. With both dyes the labeling was stable up to 4 weeks. CT and DiD labeled cells could be co-cultured and, importantly, these mixed populations had their normal ability to form spontaneous electrical network activity. In conclusion, human neural cells can be successfully labeled with these two fluorescent probes without significantly affecting the cell characteristics. These labeled cells could be utilized further in e.g. building controlled neuronal networks for neurotoxicity screening platforms, combining cells with biomaterials for 3D studies, and graft development. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Analysis of short single rest/activation epoch fMRI by self-organizing map neural network

    Science.gov (United States)

    Erberich, Stephan G.; Dietrich, Thomas; Kemeny, Stefan; Krings, Timo; Willmes, Klaus; Thron, Armin; Oberschelp, Walter

    2000-04-01

    Functional magnetic resonance imaging (fMRI) has become a standard non-invasive brain imaging technique delivering high spatial resolution. Brain activation is determined by the magnetic susceptibility of the blood oxygen level (BOLD effect) during an activation task, e.g. motor, auditory and visual tasks. Usually box-car paradigms have 2 - 4 rest/activation epochs with an overall of at least 50 volumes per scan in the time domain. Statistical test based analysis methods, like Student's t-test, need a large number of repetitively acquired brain volumes to gain statistical power. The introduced technique, based on a self-organizing neural network (SOM), makes use of the intrinsic features of the condition change between rest and activation epochs and was demonstrated to differentiate between the conditions with fewer time points, using only one rest and one activation epoch. The method reduces scan and analysis time and the probability of possible motion artifacts from the relaxation of the patient's head. Functional magnetic resonance imaging (fMRI) data of patients for pre-surgical evaluation and of volunteers were acquired with motor (hand clenching and finger tapping), sensory (ice application), auditory (phonological and semantic word recognition task) and visual paradigms (mental rotation). For imaging we used different BOLD contrast sensitive Gradient Echo Planar Imaging (GE-EPI) single-shot pulse sequences (TR 2000 and 4000, 64 X 64 and 128 X 128, 15 - 40 slices) on a Philips Gyroscan NT 1.5 Tesla MR imager. All paradigms were RARARA (R equals rest, A equals activation) with an epoch width of 11 time points each. We used the self-organizing neural network implementation described by T. Kohonen with a 4 X 2 2D neuron map. The presented time course vectors were clustered by similar features in the 2D neuron map. Three neural networks were trained and used for labeling with the time course vectors of one, two and all three on/off epochs. The results were also compared by using a
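
    The core of the method can be sketched with a tiny Kohonen map trained directly on time-course vectors. The sketch below assumes synthetic rest/activation time courses (11 + 11 time points, mimicking one RA epoch pair), a Gaussian neighbourhood and a linearly decaying learning rate; it is not the authors' implementation.

```python
# Minimal Kohonen self-organizing map sketch on synthetic fMRI-like time courses.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic voxel time courses: 11 rest + 11 activation time points.
n_voxels, t = 500, 22
data = rng.normal(0.0, 1.0, size=(n_voxels, t))
active = rng.random(n_voxels) < 0.2                  # 20% "activated" voxels
data[active, 11:] += 3.0                             # BOLD-like signal increase

# 4 x 2 map of neurons, each with a weight vector in time-course space.
grid = np.array([(i, j) for i in range(4) for j in range(2)], dtype=float)
weights = rng.normal(0.0, 1.0, size=(len(grid), t))

for epoch in range(20):
    lr = 0.5 * (1.0 - epoch / 20.0)                  # decaying learning rate
    sigma = 2.0 * (1.0 - epoch / 20.0) + 0.5         # decaying neighbourhood width
    for x in rng.permutation(data):
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)          # distances on the map
        h = np.exp(-d2 / (2.0 * sigma ** 2))                # neighbourhood function
        weights += lr * h[:, None] * (x - weights)

labels = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in data])
print("voxels per map unit:", np.bincount(labels, minlength=len(grid)))
```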

  4. Energy coding in neural network with inhibitory neurons.

    Science.gov (United States)

    Wang, Ziyin; Wang, Rubin; Fang, Ruiyan

    2015-04-01

    This paper aimed at assessing and comparing the effects of the inhibitory neurons in a neural network on the neural energy distribution, and the network activities in the absence of inhibitory neurons, to understand the nature of neural energy distribution and neural energy coding. Under stimulus, synchronous oscillation differs significantly between neural networks with and without inhibitory neurons, and this difference can be quantitatively evaluated by the characteristic energy distribution. In addition, the synchronous oscillation difference of the neural activity can be quantitatively described by the change of the energy distribution if the network parameters are gradually adjusted. Compared with the traditional method of correlation coefficient analysis, the quantitative indicators based on neural energy distribution characteristics are more effective in reflecting the dynamic features of the neural network activities. Meanwhile, this neural coding method, from a global perspective of neural activity, effectively avoids the current defects of neural encoding and decoding theory and the enormous difficulties encountered. Our studies have shown that neural energy coding is a new coding theory with high efficiency and great potential.

  5. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data was collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Those depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  6. Dynamic properties of cellular neural networks

    Directory of Open Access Journals (Sweden)

    Angela Slavova

    1993-01-01

    Full Text Available Dynamic behavior of a new class of information-processing systems called Cellular Neural Networks is investigated. In this paper we introduce a small parameter in the state equation of a cellular neural network and we seek periodic phenomena. A new approach is used for proving stability of a cellular neural network by constructing Lyapunov's majorizing equations. This algorithm is helpful for finding a map from the initial continuous state space of a cellular neural network into a discrete output. A comparison between cellular neural networks and cellular automata is made.
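
    For reference, the state equation of a standard (Chua-Yang) cellular neural network can be simulated in a few lines: x'_ij = -x_ij + (A * y)_ij + (B * u)_ij + z, with output y = 0.5 (|x + 1| - |x - 1|). The templates, bias and input image below are illustrative edge-detection-like values; the paper's small-parameter modification is not reproduced here.

```python
# Sketch of the standard cellular neural network dynamics (illustrative templates).
import numpy as np
from scipy.signal import convolve2d

def cnn_output(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate_cnn(u, A, B, z, dt=0.05, steps=400):
    x = np.zeros_like(u)
    Bu = convolve2d(u, B, mode="same", boundary="fill")    # input term is constant
    for _ in range(steps):
        dx = -x + convolve2d(cnn_output(x), A, mode="same", boundary="fill") + Bu + z
        x = x + dt * dx                                    # forward-Euler integration
    return cnn_output(x)

u = np.zeros((20, 20)); u[5:15, 5:15] = 1.0                # toy binary input image
A = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.0]])
B = np.array([[-1.0, -1.0, -1.0], [-1.0, 8.0, -1.0], [-1.0, -1.0, -1.0]])
y = simulate_cnn(u, A, B, z=-0.5)
print("output levels:", np.unique(y.round(2)))
```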

  7. Genetic neural network modeling of the selective inhibition of the intermediate-conductance Ca2+-activated K+ channel by some triarylmethanes using topological charge indexes descriptors

    Science.gov (United States)

    Caballero, Julio; Garriga, Miguel; Fernández, Michael

    2005-11-01

    Selective inhibition of the intermediate-conductance Ca2+-activated K+ channel (IKCa) by some clotrimazole analogs has been successfully modeled using topological charge indexes (TCI) and genetic neural networks (GNNs). A neural network monitoring scheme evidenced a highly non-linear dependence between the IKCa blocking activity and TCI descriptors. Suitable subsets of descriptors were selected by means of a genetic algorithm. Bayesian regularization was implemented in the network training function with the aim of assuring good generalization qualities of the predictors. GNNs were able to yield a reliable predictor that explained about 97% of the data variance with good predictive ability. On the contrary, the best multivariate linear equation with descriptors selected by linear genetic search only explained about 60%. Although using the descriptors from the linear equations to train neural networks yielded better-fitted models, such networks were very unstable and had relatively low predictive ability. However, the best GNN, BRANN 2, had a leave-one-out cross-validation Q2 equal to 0.901 and at the same time exhibited outstanding stability when calculated over 80 randomly constructed training/test set partitions. Our model suggested that structural fragments of size three and seven have a relevant influence on the inhibitory potency of the studied IKCa channel blockers. Furthermore, inhibitors were well distributed with regard to their activity levels in a Kohonen self-organizing map (KSOM) built using the inputs of the best neural network predictor.

  8. An efficient neural network approach to dynamic robot motion planning.

    Science.gov (United States)

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.
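
    A hedged sketch of the idea on a small grid world follows: each cell obeys a shunting equation whose excitatory input comes from the target and from positive neighbouring activities, and whose inhibitory input comes from obstacles; the robot then follows the activity landscape uphill. Grid size, obstacle layout and all parameter values below are illustrative, not taken from the paper.

```python
# Shunting-equation activity landscape on a toy grid (illustrative parameters).
import numpy as np

A, B, D, E, mu = 10.0, 1.0, 1.0, 100.0, 1.0
H, W = 10, 10
target, obstacles = (8, 8), {(4, j) for j in range(2, 9)}

def neighbours(i, j):
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and 0 <= i + di < H and 0 <= j + dj < W:
                yield i + di, j + dj

x = np.zeros((H, W))
ext = np.zeros((H, W))
ext[target] = E                                  # target excites
for o in obstacles:
    ext[o] = -E                                  # obstacles inhibit
for _ in range(2000):                            # forward-Euler integration
    dx = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            excite = max(ext[i, j], 0.0) + sum(mu * max(x[n], 0.0) for n in neighbours(i, j))
            inhibit = max(-ext[i, j], 0.0)
            dx[i, j] = -A * x[i, j] + (B - x[i, j]) * excite - (D + x[i, j]) * inhibit
    x += 0.001 * dx

# Greedy ascent of the activity landscape from a start cell to the target.
pos, path = (0, 0), [(0, 0)]
for _ in range(50):
    pos = max(neighbours(*pos), key=lambda n: x[n])
    path.append(pos)
    if pos == target:
        break
print(path)
```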

  9. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    Consequently, many structures based on artificial neural networks (ANN) have been developed in the literature. The most significant ... Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic Distortion. 1. ..... and pure shunt active filters, IEEE 38th Conf on Industry Applications, Vol. 2, pp.

  10. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  11. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the step of face detection, we propose a hybrid model combining AdaBoost and Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN will be aligned by Active Shape Model and Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: geometric feature based method and Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of the human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the CalTech database show the feasibility of our proposed model.

  12. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  13. Study of the possibility of determining mass flow of water from neutron activation measurements with flow simulations and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Linden, P.; Pazsit, I. [Department of Reactor Physics, Chalmers University of Technology, Goeteborg (Sweden)]

    1998-08-01

    Mass flow of water in a pipe can be measured in a non-intrusive way by the pulsed neutron activation (PNA) technique. From such measurements, mass flow can be estimated by various techniques of time averaging, performed on the time-resolved detector signal(s). However, time averaging methods have a few percent systematic error, which, in addition, is not a constant but varies with flow and measurement parameters. Achieving a precision better than 1% from PNA measurements is a hitherto unsolved task. In this paper a methodology is suggested to solve this task and is tested by simulation methods. The method is based on the use of artificial neural networks to determine the mass flow rate from the time-resolved detector signal. To achieve this, the network needs to be trained on a large amount of real detector data. It is suggested that these data should be obtained by advanced numerical simulation of the PNA measurement. In this paper we use a simplified simulation model for a feasibility study of the methodology. It is shown that a neural network is capable of determining the mass flow rate with a precision of about 0.5%. (orig.)

  14. Learning the Relationship between the Primary Structure of HIV Envelope Glycoproteins and Neutralization Activity of Particular Antibodies by Using Artificial Neural Networks

    Science.gov (United States)

    Buiu, Cătălin; Putz, Mihai V.; Avram, Speranta

    2016-01-01

    The dependency between the primary structure of HIV envelope glycoproteins (ENV) and the neutralization data for given antibodies is very complicated and depends on a large number of factors, such as the binding affinity of a given antibody for a given ENV protein, and the intrinsic infection kinetics of the viral strain. This paper presents a first approach to learning these dependencies using an artificial feedforward neural network which is trained to learn from experimental data. The results presented here demonstrate that the trained neural network is able to generalize on new viral strains and to predict reliable values of neutralizing activities of given antibodies against HIV-1. PMID:27727189

  15. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    Science.gov (United States)

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by the ones with Mexican-hat-type activation functions. In addition, unlike most existing multistability results for neural networks with monotonic activation functions, the 3^n locally stable equilibrium points obtained here are located both in saturated regions and in unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
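
    The flavour of the multistability result can be illustrated in one dimension (n = 1). The piecewise-linear activation below is a hypothetical non-monotonic function chosen so that x' = -x + f(x) has five equilibria, three of them stable, mirroring the 5^n/3^n count quoted above; it is not the activation defined in the paper.

```python
# Illustrative only: count the stable equilibria of x' = -x + f(x) for a
# hypothetical non-monotonic piecewise-linear activation f.
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0, 4.0, 5.0, 9.0, 12.0])   # breakpoints
ys = np.array([ 0.0, 0.0, 0.5, 3.5, 3.5, 7.5, 7.5, 4.5])    # non-monotonic values

def f(x):
    return np.interp(x, xs, ys)

dt, finals = 0.01, set()
for x0 in np.linspace(-2.0, 14.0, 161):          # many initial conditions
    x = x0
    for _ in range(4000):                        # forward-Euler integration
        x += dt * (-x + f(x))
    finals.add(round(float(x), 2))
print("stable equilibria reached:", sorted(finals))   # expect [0.0, 3.5, 7.5]
```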

  16. Practical neural network recipies in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed

  17. Effect of Activation Function and Post Synaptic Potential on Response of Artificial Neural Network to Predict Frictional Resistance of Aluminium Alloy Sheets

    Science.gov (United States)

    Trzepiecinski, T.; Lemu, H. G.

    2017-11-01

    Many technological factors affect the friction phenomenon in the sheet metal forming process. As a result, the determination of an analytical model describing the frictional resistance is very difficult. In this paper, a friction model was built based on the experimental results of strip drawing tests. Friction tests were carried out in order to determine the effect of surface and tool roughness parameters, the pressure force and the mechanical parameters of the sheets on the value of the coefficient of friction. The strip drawing friction tests were conducted on aluminium alloy sheets: AA5251-H14, AA5754-H14, AA5754-H18, AA5754-H24. The surface topography of the sheets was measured using a Taylor Hobson Surtronic 3+ instrument. In order to describe the complex relations between friction and the factors influencing the tribological conditions of sheet metal forming, a multilayer artificial neural network was built in the Statistica Neural Network program. The effect of the activation function and the post synaptic potential function on the ability of the multilayer neural network to predict the friction coefficient value is presented. It has been found that the difference in prediction error of the neural network for different approaches can reach 400%. Thus, the proper selection of activation and post synaptic potential functions is crucial in neural network modelling.
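
    How such a comparison of activation functions can be set up is sketched below with scikit-learn rather than the Statistica Neural Network program, and with synthetic stand-ins for the strip-drawing measurements; the numbers it prints are therefore not comparable with the 400% figure quoted above.

```python
# Hedged sketch: compare hidden-layer activation functions on a small regression task.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(300, 4))          # roughness, pressure, ... (placeholders)
y = 0.15 + 0.1 * X[:, 0] - 0.05 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for act in ("logistic", "tanh", "relu"):
    model = MLPRegressor(hidden_layer_sizes=(10,), activation=act,
                         max_iter=5000, random_state=0).fit(X_tr, y_tr)
    print(f"{act:8s} R^2 = {model.score(X_te, y_te):.3f}")
```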

  18. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  19. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is dedicated to investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. The document also describes the MemBrain application environment as a powerful and easy to use neural network editor and simulator supporting ANN.

  20. Tumor Diagnosis Using Backpropagation Neural Network Method

    Science.gov (United States)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier Transform infrared (FT-IR) spectrum obtained by a new Fiberoptic Evanescent Wave Fourier Transform Infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis. These abstract features or factors are also used in the classification.
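
    A minimal stand-in for the classifier architecture described above (one hidden layer of sigmoid units, ten input features, three output classes), trained on synthetic feature vectors because the FEW-FTIR spectra are not available here; this uses scikit-learn, not the authors' code.

```python
# Three-class, three-layer perceptron sketch with sigmoid hidden units.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
centers = rng.normal(0.0, 1.0, size=(3, 10))               # hypothetical class centres
X = np.vstack([centers[k] + 0.3 * rng.normal(size=(100, 10)) for k in range(3)])
y = np.repeat(np.arange(3), 100)                           # 0=normal, 1=benign, 2=melanoma

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```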

  1. QSAR of antitrypanosomal activities of polyphenols and their analogues using multiple linear regression and artificial neural networks.

    Science.gov (United States)

    Rastija, Vesna; Masand, Vijay H

    2014-01-01

    In order to find a successful quantitative structure-activity relationship for the antitrypanosomal activities (against Trypanosoma brucei rhodesiense) of polyphenols that belong to different structural groups, multiple linear regression (MLR) and artificial neural networks (ANN) were employed. The analysis was performed on two different-sized training sets (59% and 78% of the molecules in the training set), resulting in relatively successful MLR and ANN models for the data set containing the smaller training set. The best MLR model, obtained using five descriptors (R3m+, GAP, DISPv, HATS2m, JGI2), was able to account for only 74% of the variance of the antitrypanosomal activities of the training set and achieved high internal, but low external, prediction. Nonlinearities of the best ANN model compared with the linear model improved the coefficient of determination to 98.6% and showed better external predictive ability. The obtained models displayed the relevance of the distance between oxygen atoms in molecules of polyphenols, as well as the stability of the molecules, measured by the difference between the energy of the highest occupied molecular orbital and the energy of the lowest unoccupied molecular orbital (GAP), for their activity.

  2. Neural network modelling of antifungal activity of a series of oxazole derivatives based on in silico pharmacokinetic parameters

    Directory of Open Access Journals (Sweden)

    Kovačević Strahinja Z.

    2013-01-01

    Full Text Available In the present paper, the antifungal activity of a series of benzoxazole and oxazolo[4,5-b]pyridine derivatives was evaluated against Candida albicans by using a quantitative structure-activity relationship (QSAR) chemometric methodology with an artificial neural network (ANN) regression approach. In vitro antifungal activity of the tested compounds was presented by the minimum inhibitory concentration expressed as log(1/cMIC). In silico pharmacokinetic parameters related to absorption, distribution, metabolism and excretion (ADME) were calculated for all studied compounds by using PreADMET software. A feedforward back-propagation ANN with a gradient descent learning algorithm was applied for modelling of the relationship between ADME descriptors (blood-brain barrier penetration, plasma protein binding, Madin-Darby cell permeability and Caco-2 cell permeability) and experimental log(1/cMIC) values. A 4-6-1 ANN was developed with the optimum momentum and learning rates of 0.3 and 0.05, respectively. An excellent correlation between experimental antifungal activity and values predicted by the ANN was obtained with a correlation coefficient of 0.9536. [Project of the Ministry of Science of the Republic of Serbia, Nos. 172012 and 172014]
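
    A bare-bones 4-6-1 feedforward network trained by gradient descent with the quoted momentum (0.3) and learning rate (0.05) can be written directly in NumPy; the ADME descriptor values and targets below are random placeholders, not the data of the study.

```python
# Minimal 4-6-1 backpropagation sketch with a momentum term (placeholder data).
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 4))                       # 4 ADME-like inputs (placeholders)
y = np.tanh(X @ rng.normal(size=4))[:, None]       # synthetic log(1/cMIC)-like target

W1, b1 = rng.normal(scale=0.5, size=(4, 6)), np.zeros(6)
W2, b2 = rng.normal(scale=0.5, size=(6, 1)), np.zeros(1)
lr, momentum = 0.05, 0.3
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                       # hidden layer (6 tanh units)
    out = h @ W2 + b2                              # linear output unit
    err = out - y
    g_out = 2.0 * err / len(X)                     # gradient of the mean squared error
    grads = [
        X.T @ ((g_out @ W2.T) * (1.0 - h ** 2)),   # dL/dW1
        ((g_out @ W2.T) * (1.0 - h ** 2)).sum(0),  # dL/db1
        h.T @ g_out,                               # dL/dW2
        g_out.sum(0),                              # dL/db2
    ]
    for i, (p, g) in enumerate(zip((W1, b1, W2, b2), grads)):
        vel[i] = momentum * vel[i] - lr * g        # momentum update
        p += vel[i]

print("final MSE:", float((err ** 2).mean()))
```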

  3. Satellite image analysis using neural networks

    Science.gov (United States)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  4. Fuzzy neural networks: theory and applications

    Science.gov (United States)

    Gupta, Madan M.

    1994-10-01

    During recent years, significant advances have been made in two distinct technological areas: fuzzy logic and computational neural networks. The theory of fuzzy logic provides a mathematical framework to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. It also provides a mathematical morphology to emulate certain perceptual and linguistic attributes associated with human cognition. On the other hand, the computational neural network paradigms have evolved in the process of understanding the incredible learning and adaptive features of neuronal mechanisms inherent in certain biological species. Computational neural networks replicate, on a small scale, some of the computational operations observed in biological learning and adaptation. The integration of these two fields, fuzzy logic and neural networks, has given birth to an emerging technological field -- fuzzy neural networks. Fuzzy neural networks have the potential to capture the benefits of these two fascinating fields, fuzzy logic and neural networks, into a single framework. The intent of this tutorial paper is to describe the basic notions of biological and computational neuronal morphologies, and to describe the principles and architectures of fuzzy neural networks. Towards this goal, we develop a fuzzy neural architecture based upon the notion of T-norm and T-conorm connectives. An error-based learning scheme is described for this neural structure.
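
    One concrete instance of a fuzzy neuron built from T-norm and T-conorm connectives is sketched below, using the product T-norm and the probabilistic-sum T-conorm; these are common choices and are not necessarily the ones adopted in the paper.

```python
# Sketch of a fuzzy "OR" neuron: y = S_j T(w_j, x_j).
import numpy as np

def t_norm(a, b):
    """Product T-norm (fuzzy AND)."""
    return a * b

def t_conorm(a, b):
    """Probabilistic-sum T-conorm (fuzzy OR)."""
    return a + b - a * b

def fuzzy_or_neuron(x, w):
    """Inputs are gated by weights with the T-norm, then aggregated with the T-conorm."""
    out = 0.0
    for xi, wi in zip(x, w):
        out = t_conorm(out, t_norm(wi, xi))
    return out

x = np.array([0.9, 0.2, 0.6])      # membership degrees of the inputs (in [0, 1])
w = np.array([0.8, 0.5, 0.1])      # connection weights (in [0, 1])
print(round(fuzzy_or_neuron(x, w), 3))
```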

  5. Pediatric Nutritional Requirements Determination with Neural Networks

    OpenAIRE

    Karlık, Bekir; Ece, Aydın

    1998-01-01

    To calculate the daily nutritional requirements of children, a computer program has been developed based upon a neural network. Three parameters, daily protein, energy and water requirements, were calculated through trained artificial neural networks using a database of 312 children. The results were compared with those calculated from the dietary requirement tables of the World Health Organisation. No significant difference was found between the two calculations. In conclusion, a simple neural network may ...

  6. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  7. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others]

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real-time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
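
    The OLAM idea for spectra can be sketched with a pseudoinverse memory: store one reference spectrum per isotope and recover the mixture composition of an unknown spectrum that is approximately a linear superposition of the stored ones. The Gaussian peak shapes below are synthetic placeholders, not real detector data.

```python
# Optimal linear associative memory (OLAM) sketch on synthetic spectra.
import numpy as np

rng = np.random.default_rng(3)
channels = np.arange(512)

def peak(center, width=8.0):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

# Hypothetical library: one reference spectrum per isotope (columns).
library = np.stack([peak(100), peak(220) + 0.4 * peak(330), peak(450)], axis=1)

# OLAM memory built from the pseudoinverse: memory @ spectrum gives the coefficients.
memory = np.linalg.pinv(library)

truth = np.array([0.7, 0.0, 0.3])                     # unknown sample composition
unknown = library @ truth + 0.01 * rng.normal(size=channels.size)
print("recovered composition:", np.round(memory @ unknown, 2))
```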

  8. Neural network based system for equipment surveillance

    Science.gov (United States)

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involve the steps of reading training data into a memory and determining neural network weighting values until target outputs close to the neural network output are achieved. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  9. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to help

  10. Pansharpening by Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Giuseppe Masi

    2016-07-01

    Full Text Available A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem. Moreover, to improve performance without increasing complexity, we augment the input by including several maps of nonlinear radiometric indices typical of remote sensing. Experiments on three representative datasets show the proposed method to provide very promising results, largely competitive with the current state of the art in terms of both full-reference and no-reference metrics, and also at a visual inspection.

  11. Neural networks and perceptual learning

    Science.gov (United States)

    Tsodyks, Misha; Gilbert, Charles

    2005-01-01

    Sensory perception is a learned trait. The brain strategies we use to perceive the world are constantly modified by experience. With practice, we subconsciously become better at identifying familiar objects or distinguishing fine details in our environment. Current theoretical models simulate some properties of perceptual learning, but neglect the underlying cortical circuits. Future neural network models must incorporate the top-down alteration of cortical function by expectation or perceptual tasks. These newly found dynamic processes are challenging earlier views of static and feedforward processing of sensory information. PMID:15483598

  12. Optimization with Potts Neural Networks

    Science.gov (United States)

    Söderberg, Bo

    The Potts Neural Network approach to non-binary discrete optimization problems is described. It applies to problems that can be described as a set of elementary `multiple choice' options. Instead of the conventional binary (Ising) neurons, mean field Potts neurons, having several available states, are used to describe the elementary degrees of freedom of such problems. The dynamics consists of iterating the mean field equations with annealing until convergence. Due to its deterministic character, the method is quite fast. When applied to problems of Graph Partition and scheduling types, it produces very good solutions also for problems of considerable size.
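
    A compact sketch of mean-field Potts annealing for balanced graph partition follows; the cost terms, graph and annealing schedule are illustrative choices, not those of the article. Each node carries a mean-field Potts neuron (a probability vector over the q parts) that is updated by a softmax of its local field while the temperature is lowered.

```python
# Mean-field Potts annealing sketch for balanced graph partition (toy graph).
import numpy as np

rng = np.random.default_rng(5)
n, q, alpha = 24, 3, 0.5

# Random symmetric adjacency matrix of an undirected toy graph.
w = (rng.random((n, n)) < 0.2).astype(float)
w = np.triu(w, 1); w = w + w.T

v = rng.dirichlet(np.ones(q), size=n)                 # initial Potts neuron states
T = 2.0
while T > 0.02:
    for _ in range(30):                               # mean-field iterations at this T
        load = v.sum(axis=0)                          # current (soft) part sizes
        u = w @ v - alpha * (load - v)                # local fields: cohesion - balance
        e = np.exp((u - u.max(axis=1, keepdims=True)) / T)
        v = e / e.sum(axis=1, keepdims=True)          # Potts mean-field (softmax) update
    T *= 0.9                                          # annealing schedule

parts = v.argmax(axis=1)
cut = sum(w[i, j] for i in range(n) for j in range(i + 1, n) if parts[i] != parts[j])
print("part sizes:", np.bincount(parts, minlength=q), " cut edges:", int(cut))
```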

  13. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the
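
    A toy version of the genetic training loop is sketched below for a 64-8-5 feed-forward net on 8 x 8 bitmaps. The five "characters" are random prototype bitmaps with pixel-flip noise, and the GA settings are illustrative; none of this is taken from the Neural Network Designer software described above.

```python
# Genetic training of feed-forward network weights on toy 8x8 bitmap classes.
import numpy as np

rng = np.random.default_rng(11)
protos = (rng.random((5, 64)) < 0.5).astype(float)           # 5 prototype bitmaps

def make_batch(n_per_class=20, flip=0.05):
    X, y = [], []
    for c, p in enumerate(protos):
        noise = rng.random((n_per_class, 64)) < flip
        X.append(np.abs(p - noise))                          # flip a few pixels
        y.extend([c] * n_per_class)
    return np.vstack(X), np.array(y)

X, y = make_batch()
shapes = [(64, 8), (8,), (8, 5), (5,)]                       # 64-8-5 architecture
n_params = sum(int(np.prod(s)) for s in shapes)

def forward(genome, X):
    i, params = 0, []
    for s in shapes:
        size = int(np.prod(s))
        params.append(genome[i:i + size].reshape(s)); i += size
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(genome):
    return float((forward(genome, X).argmax(axis=1) == y).mean())

pop = rng.normal(scale=0.5, size=(60, n_params))             # initial population
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(-scores)[:20]]                  # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(n_params) < 0.5                    # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(scale=0.05, size=n_params))
    pop = np.vstack([parents, children])
print("best training accuracy:", max(fitness(g) for g in pop))
```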

  14. Analysis of the internal representations developed by neural networks for structures applied to quantitative structure--activity relationship studies of benzodiazepines.

    Science.gov (United States)

    Micheli, A; Sperduti, A; Starita, A; Bianucci, A M

    2001-01-01

    An application of recursive cascade correlation (CC) neural networks to quantitative structure-activity relationship (QSAR) studies is presented, with emphasis on the study of the internal representations developed by the neural networks. Recursive CC is a neural network model recently proposed for the processing of structured data. It allows the direct handling of chemical compounds as labeled ordered directed graphs, and constitutes a novel approach to QSAR. The adopted representation of molecular structure captures, in a quite general and flexible way, significant topological aspects and chemical functionalities for each specific class of molecules showing a particular chemical reactivity or biological activity. A class of 1,4-benzodiazepin-2-ones is analyzed by the proposed approach. It compares favorably versus the traditional QSAR treatment based on equations. To show the ability of the model in capturing most of the structural features that account for the biological activity, the internal representations developed by the networks are analyzed by principal component analysis. This analysis shows that the networks are able to discover relevant structural features just on the basis of the association between the molecular morphology and the target property (affinity).

  15. A neural network model for texture discrimination.

    Science.gov (United States)

    Xing, J; Gerstein, G L

    1993-01-01

    A model of texture discrimination in visual cortex was built using a feedforward network with lateral interactions among relatively realistic spiking neural elements. The elements have various membrane currents, equilibrium potentials and time constants, with action potentials and synapses. The model is derived from the modified programs of MacGregor (1987). Gabor-like filters are applied to overlapping regions in the original image; the neural network with lateral excitatory and inhibitory interactions then compares and adjusts the Gabor amplitudes in order to produce the actual texture discrimination. Finally, a combination layer selects and groups various representations in the output of the network to form the final transformed image material. We show that both texture segmentation and detection of texture boundaries can be represented in the firing activity of such a network for a wide variety of synthetic to natural images. Performance details depend most strongly on the global balance of strengths of the excitatory and inhibitory lateral interconnections. The spatial distribution of lateral connective strengths has relatively little effect. Detailed temporal firing activities of single elements in the lateral connected network were examined under various stimulus conditions. Results show (as in area 17 of cortex) that a single element's response to image features local to its receptive field can be altered by changes in the global context.

  16. Three dimensional living neural networks

    Science.gov (United States)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo, and when exposed to retinoic acid the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC-derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  17. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  18. Reject mechanisms for massively parallel neural network character recognition systems

    Science.gov (United States)

    Garris, Michael D.; Wilson, Charles L.

    1992-12-01

    Two reject mechanisms are compared using a massively parallel character recognition system implemented at NIST. The recognition system was designed to study the feasibility of automatically recognizing hand-printed text in a loosely constrained environment. The first method is a simple scalar threshold on the output activation of the winning neurode from the character classifier network. The second method uses an additional neural network trained on all outputs from the character classifier network to accept or reject assigned classifications. The neural network rejection method was expected to perform with greater accuracy than the scalar threshold method, but this was not supported by the test results presented. The scalar threshold method, even though arbitrary, is shown to be a viable reject mechanism for use with neural network character classifiers. Upon studying the performance of the neural network rejection method, analyses show that the two neural networks, the character classifier network and the rejection network, perform very similarly. This can be explained by the strong non-linear function of the character classifier network which effectively removes most of the correlation between character accuracy and all activations other than the winning activation. This suggests that any effective rejection network must receive information from the system which has not been filtered through the non-linear classifier.
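
    The first (scalar threshold) reject rule amounts to a few lines: accept the winning class only if its output activation exceeds a threshold, otherwise reject. The threshold and activation values below are made up for illustration.

```python
# Scalar-threshold reject rule on classifier output activations.
import numpy as np

def classify_with_reject(activations, threshold=0.7):
    """Return the winning class index, or None to signal rejection."""
    winner = int(np.argmax(activations))
    return winner if activations[winner] >= threshold else None

print(classify_with_reject(np.array([0.05, 0.91, 0.04])))   # confident -> class 1
print(classify_with_reject(np.array([0.40, 0.35, 0.25])))   # ambiguous -> None (reject)
```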

  19. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    of any exogenous input requirement makes the network attractive. A neural network is an information processing system modeled on the structure of the human brain. Its merit is the ability to deal with fuzzy information whose interrelation is ambiguous...

  20. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  1. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1991-01-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and non-contrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include data base manipulation and the solving of routing and classification types of optimization problems. It was their use in optimization that got me involved with Neural Networks. As it turned out, "optimization" used in this context was somewhat misleading, because while some network configurations could indeed solve certain kinds of optimization problems, the configuring or "training" of a Neural Network itself is an optimization problem, and most of the literature which talked about Neural Nets and optimization in the same breath did not speak to my goal of using Neural Nets to help solve lens optimization problems. I did eventually apply Neural Networks to lens optimization, and I will touch on those results. The application of Neural Nets to the problem of lens selection was much more successful, and those results will dominate this paper.

  2. Radiation Behavior of Analog Neural Network Chip

    Science.gov (United States)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  3. Neural network approach to parton distributions fitting

    CERN Document Server

    Piccione, Andrea; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea; Rojo, Joan

    2006-01-01

    We will show an application of neural networks to extract information on the structure of hadrons. A Monte Carlo over experimental data is performed to correctly reproduce data errors and correlations. A neural network is then trained on each Monte Carlo replica via a genetic algorithm. Results on the proton and deuteron structure functions, and on the nonsinglet parton distribution will be shown.

  4. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  6. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous spectrophotometric multicomponent analysis are suggested, with a study on the estimation of the components of an antihypertensive combination, namely, atenolol and losartan potassium.

  7. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  8. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility to use Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, Gaussian energy distribution and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  9. Neural Network to Solve Concave Games

    OpenAIRE

    Zixin Liu; Nengfa Wang

    2014-01-01

    This paper is concerned with a neural network method for solving concave games. By combining variational inequalities, the Ky Fan inequality and the projection equation, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  10. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called synaptic weights) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  11. Marginalization in Random Nonlinear Neural Networks

    Science.gov (United States)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.
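
    A minimal sketch of the Random Nonlinear Network idea, under the assumption that it is fair to illustrate it as a feedforward net with fixed, normally distributed input-to-hidden weights, sigmoidal hidden units and trained output weights; the toy data and the least-squares readout below stand in for the paper's probabilistic training target.

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Toy 2-D "population" input and a stand-in target for the desired readout.
      X = rng.normal(size=(500, 2))
      y = np.sin(X[:, 0]) * np.cos(X[:, 1])

      n_hidden = 200
      W_in = rng.normal(size=(2, n_hidden))     # fixed, normally distributed weights
      b = rng.normal(size=n_hidden)

      H = sigmoid(X @ W_in + b)                 # random nonlinear hidden layer
      # Only the output weights are trained (here by regularized least squares).
      lam = 1e-3
      W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

      y_hat = H @ W_out
      print("training MSE:", float(np.mean((y - y_hat) ** 2)))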

  12. Internal model control of inductive magnetic suspension spherical active joints based on fuzzy neural network inverse system

    Directory of Open Access Journals (Sweden)

    Li Zeng

    2015-11-01

    Full Text Available This article proposes an inductive magnetic suspension spherical active joint and investigates its mechanism. The expression for the motor's electromagnetic torque is derived from the power balance of a three-dimensional electromagnetic model, and, based on the air-gap magnetic flux density distribution, a mathematical model of the joint's electromagnetic levitation force is established. The relationships among displacement, angle, and current, together with the transfer function of the motor system, are derived from the state equation and inverse system theory. The inverse system of the joint's original system is constructed using fuzzy neural network theory; the coupling among the motor's multiple variables is simplified to build an ANFIS model of the joint's inverse system. An internal model controller with high robustness and stability is designed, and an internal-model-controlled pseudo-linear system for the joint is built. Simulation analysis and experimental verification of the joint control system indicate that the rotor has a quick dynamic response and high robustness.

  13. Global synchronization in finite time for fractional-order neural networks with discontinuous activations and time delays.

    Science.gov (United States)

    Peng, Xiao; Wu, Huaiqin; Song, Ka; Shi, Jiaxin

    2017-10-01

    This paper is concerned with the global Mittag-Leffler synchronization and the synchronization in finite time for fractional-order neural networks (FNNs) with discontinuous activations and time delays. Firstly, the properties with respect to Mittag-Leffler convergence and convergence in finite time, which play a critical role in the investigation of the global synchronization of FNNs, are developed, respectively. Secondly, the novel state-feedback controller, which includes time delays and discontinuous factors, is designed to realize the synchronization goal. By applying the fractional differential inclusion theory, inequality analysis technique and the proposed convergence properties, the sufficient conditions to achieve the global Mittag-Leffler synchronization and the synchronization in finite time are addressed in terms of linear matrix inequalities (LMIs). In addition, the upper bound of the setting time of the global synchronization in finite time is explicitly evaluated. Finally, two examples are given to demonstrate the validity of the proposed design method and theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    Science.gov (United States)

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it needs a large amount of computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode the behavior of the brain for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves the prediction performance; significant features are selected using the t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and the classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).
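
    The following sketch, using scikit-learn on synthetic data standing in for voxel responses (all numbers are invented), illustrates only the MVPA part of the pipeline: univariate feature selection followed by a multi-class linear SVM. It is not the authors' code and omits the GLM and CNN stages.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # Synthetic stand-in for fMRI data: 120 trials x 2000 voxels, 4 stimulus classes.
      X = rng.normal(size=(120, 2000))
      y = np.repeat(np.arange(4), 30)
      X[:, :20] += y[:, None] * 0.5      # a few informative "voxels"

      # Univariate (t-test/F-test style) voxel selection followed by a multi-class SVM.
      clf = make_pipeline(SelectKBest(f_classif, k=100), SVC(kernel="linear"))
      scores = cross_val_score(clf, X, y, cv=5)
      print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))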

  15. Initialization of multilayer forecasting artificial neural networks

    OpenAIRE

    Bochkarev, Vladimir V.; Maslennikova, Yulia S.

    2014-01-01

    In this paper, a new method was developed for initialising artificial neural networks predicting dynamics of time series. Initial weighting coefficients were determined for neurons analogously to the case of a linear prediction filter. Moreover, to improve the accuracy of the initialization method for a multilayer neural network, some variants of decomposition of the transformation matrix corresponding to the linear prediction filter were suggested. The efficiency of the proposed neural netwo...

  16. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
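
    A hedged sketch of the two-phase idea: a simple competitive-learning pass places the RBF centres, and a linear output layer fitted by least squares plays the role of the RBF network. The descriptor matrix and activity values are synthetic placeholders, not the phenol or ROCK data sets used in the study.

      import numpy as np

      rng = np.random.default_rng(0)

      def scl_centres(X, n_centres=15, lr=0.1, epochs=20):
          """Simple competitive learning: move the winning centre towards each sample."""
          centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
          for _ in range(epochs):
              for x in rng.permutation(X):
                  k = np.argmin(np.linalg.norm(centres - x, axis=1))
                  centres[k] += lr * (x - centres[k])
          return centres

      def rbf_design(X, centres, width):
          d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
          return np.exp(-(d ** 2) / (2 * width ** 2))

      # Toy descriptor matrix (e.g. molecular descriptors) and activity values.
      X = rng.normal(size=(200, 5))
      y = X[:, 0] ** 2 - X[:, 1] + 0.1 * rng.normal(size=200)

      C = scl_centres(X)
      Phi = rbf_design(X, C, width=1.5)
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # linear output layer
      print("training RMSE:", float(np.sqrt(np.mean((Phi @ w - y) ** 2))))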

  17. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  18. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  19. Clustering: a neural network approach.

    Science.gov (United States)

    Du, K-L

    2010-01-01

    Clustering is a fundamental data analysis method. It is widely used for pattern recognition, feature extraction, vector quantization (VQ), image segmentation, function approximation, and data mining. As an unsupervised classification technique, clustering identifies some inherent structures present in a set of objects based on a similarity measure. Clustering methods can be based on statistical model identification (McLachlan & Basford, 1988) or competitive learning. In this paper, we give a comprehensive overview of competitive learning based clustering methods. Importance is attached to a number of competitive learning based clustering neural networks such as the self-organizing map (SOM), the learning vector quantization (LVQ), the neural gas, and the ART model, and clustering algorithms such as the C-means, mountain/subtractive clustering, and fuzzy C-means (FCM) algorithms. Associated topics such as the under-utilization problem, fuzzy clustering, robust clustering, clustering based on non-Euclidean distance measures, supervised clustering, hierarchical clustering as well as cluster validity are also described. Two examples are given to demonstrate the use of the clustering methods.

  20. Complex-valued Neural Networks

    Science.gov (United States)

    Hirose, Akira

    This paper reviews the features and applications of complex-valued neural networks (CVNNs). First we list the present application fields, and describe the advantages of the CVNNs in two application examples, namely, an adaptive plastic-landmine visualization system and an optical frequency-domain-multiplexed learning logic circuit. Then we briefly discuss the features of complex number itself to find that the phase rotation is the most significant concept, which is very useful in processing the information related to wave phenomena such as lightwave and electromagnetic wave. The CVNNs will also be an indispensable framework of the future microelectronic information-processing hardware where the quantum electron wave plays the principal role.

  1. Collision avoidance using neural networks

    Science.gov (United States)

    Sugathan, Shilpa; Sowmya Shree, B. V.; Warrier, Mithila R.; Vidhyapathi, C. M.

    2017-11-01

    Nowadays, road accidents are caused by the negligence of drivers and pedestrians or by unexpected obstacles that come into the vehicle's path. In this paper, a model (robot) is developed to assist drivers in travelling smoothly without accidents. It reacts to real-time obstacles on the four critical sides of the vehicle and takes the necessary action. The sensor used for detecting the obstacles was an IR proximity sensor. A single-layer perceptron neural network was trained and tested offline in Matlab on all possible combinations of sensor outputs. A microcontroller (ARM Cortex-M3 LPC1768) is used to control the vehicle using the output data received from Matlab via serial communication. Hence, the vehicle becomes capable of reacting to any combination of real-time obstacles.
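
    As an illustration (not the authors' implementation), the sketch below trains a single-layer perceptron offline on all 16 combinations of four binary proximity-sensor readings; the target rule used here (stop whenever the front sensor fires) is hypothetical.

      import numpy as np
      from itertools import product

      # All 16 combinations of the four IR proximity sensors (1 = obstacle detected).
      X = np.array(list(product([0, 1], repeat=4)), dtype=float)
      # Hypothetical target: stop (1) whenever the front sensor (column 0) sees an obstacle.
      y = X[:, 0].copy()

      w = np.zeros(4)
      b = 0.0
      for _ in range(20):                       # perceptron learning rule
          for xi, ti in zip(X, y):
              pred = float(w @ xi + b > 0)
              w += (ti - pred) * xi
              b += (ti - pred)

      preds = (X @ w + b > 0).astype(float)
      print("training accuracy:", (preds == y).mean())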

  2. A renaissance of neural networks in drug discovery.

    Science.gov (United States)

    Baskin, Igor I; Winkler, David; Tetko, Igor V

    2016-08-01

    Neural networks are becoming a very popular method for solving machine learning and artificial intelligence problems. The variety of neural network types and their application to drug discovery requires expert knowledge to choose the most appropriate approach. In this review, the authors discuss traditional and newly emerging neural network approaches to drug discovery. Their focus is on backpropagation neural networks and their variants, self-organizing maps and associated methods, and a relatively new technique, deep learning. The most important technical issues are discussed including overfitting and its prevention through regularization, ensemble and multitask modeling, model interpretation, and estimation of applicability domain. Different aspects of using neural networks in drug discovery are considered: building structure-activity models with respect to various targets; predicting drug selectivity, toxicity profiles, ADMET and physicochemical properties; characteristics of drug-delivery systems and virtual screening. Neural networks continue to grow in importance for drug discovery. Recent developments in deep learning suggest further improvements may be gained in the analysis of large chemical data sets. It is anticipated that neural networks will be more widely used in drug discovery in the future, and applied in non-traditional areas such as drug delivery systems, biologically compatible materials, and regenerative medicine.

  3. Characterization of Early Cortical Neural Network ...

    Science.gov (United States)

    We examined the development of neural network activity using microelectrode array (MEA) recordings made in multi-well MEA plates (mwMEAs) over the first 12 days in vitro (DIV). In primary cortical cultures made from postnatal rats, action potential spiking activity was essentially absent on DIV 2 and developed rapidly between DIV 5 and 12. Spiking activity was primarily sporadic and unorganized at early DIV, and became progressively more organized with time in culture, with bursting parameters, synchrony and network bursting increasing between DIV 5 and 12. We selected 12 features to describe network activity and principal components analysis using these features demonstrated a general segregation of data by age at both the well and plate levels. Using a combination of random forest classifiers and Support Vector Machines, we demonstrated that 4 features (CV of within burst ISI, CV of IBI, network spike rate and burst rate) were sufficient to predict the age (either DIV 5, 7, 9 or 12) of each well recording with >65% accuracy. When restricting the classification problem to a binary decision, we found that classification improved dramatically, e.g. 95% accuracy for discriminating DIV 5 vs DIV 12 wells. Further, we present a novel resampling approach to determine the number of wells that might be needed for conducting comparisons of different treatments using mwMEA plates. Overall, these results demonstrate that network development on mwMEA plates is similar to

  4. Quantum generalisation of feedforward neural networks

    Science.gov (United States)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  5. Program Aids Simulation Of Neural Networks

    Science.gov (United States)

    Baffes, Paul T.

    1990-01-01

    Computer program NETS - Tool for Development and Evaluation of Neural Networks - provides simulation of neural-network algorithms plus a software environment for development of such algorithms. Enables user to customize patterns of connections between layers of network, and provides features for saving weight values of network, providing for more precise control over learning process. Use of the program consists of translating the problem into a format of input/output pairs, designing a network configuration for the problem, and finally training the network with the input/output pairs until an acceptable error is reached. Written in C.

  6. Applications of response surface methodology and artificial neural network for decolorization of distillery spent wash by using activated Piper nigrum.

    Science.gov (United States)

    Arulmathi, P; Elangovan, G

    2016-11-01

    Ethanol production from sugarcane molasses yields a large volume of highly colored spent wash as effluent. This color is imparted by the recalcitrant melanoidin pigment produced by the Maillard reaction. In the present work, decolourization of melanoidin was carried out using activated carbon prepared from pepper stem (Piper nigrum). The interaction effects between parameters were studied by response surface methodology using a central composite design, and a maximum decolourization of 75 % was obtained at pH 7.5 and a melanoidin concentration of 32.5 mg l-1 with 1.63 g per 100 ml of adsorbent for 2 hr 75 min. Artificial neural networks were also used to optimize the process parameters, giving 74 % decolourization for the same parameters. The Langmuir and Freundlich isotherms were applied to describe the biosorption equilibrium; the process was best represented by the Langmuir isotherm, with a correlation coefficient of 0.94. First-order and second-order models were used to describe the biosorption mechanism, and the pseudo-second-order kinetic model fitted the experimental data best. The estimated enthalpy change (ΔH) and entropy change (ΔS) of adsorption were 32.195 kJ mol-1 and 115.44 J mol-1 K-1, which indicates that the adsorption of melanoidin was an endothermic process. Continuous adsorption studies were conducted under the optimized conditions, and the breakthrough curve was determined from the experimental data obtained from continuous adsorption. Continuous column studies gave a breakthrough at 182 min and 176 ml. It was concluded that a column packed with Piper nigrum-based activated carbon can be used to remove color from distillery spent wash.
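
    For readers unfamiliar with the isotherm-fitting step, the sketch below fits the Langmuir model qe = qmax*KL*Ce/(1 + KL*Ce) to equilibrium data with scipy; the Ce/qe values are invented for illustration and are not the melanoidin measurements reported above.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(Ce, qmax, KL):
          """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
          return qmax * KL * Ce / (1.0 + KL * Ce)

      # Hypothetical equilibrium data: Ce (mg/l) and qe (mg/g); not from the paper.
      Ce = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
      qe = np.array([4.1, 8.0, 12.3, 14.2, 15.1, 16.0])

      (qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.1])
      residuals = qe - langmuir(Ce, qmax, KL)
      r2 = 1 - np.sum(residuals ** 2) / np.sum((qe - qe.mean()) ** 2)
      print("qmax=%.2f mg/g, KL=%.3f l/mg, R^2=%.3f" % (qmax, KL, r2))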

  7. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  8. Neural-Network Control Of Prosthetic And Robotic Hands

    Science.gov (United States)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks are proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices that aid intact but nonfunctional hands. The network is specific to the patient, who activates the grasping motion by voice command, by mechanical switch, or by myoelectric impulse. The patient retains higher-level control, while lower-level control is provided by a neural network analogous to a miniature brain. During training, the patient teaches this miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  9. Modeling the dynamics of human brain activity with recurrent neural networks

    NARCIS (Netherlands)

    Güçlü, U.; Gerven, M.A.J. van

    2017-01-01

    Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear convolution

  10. Duct Modeling Using the Generalized RBF Neural Network for Active Cancellation of Variable Frequency Narrow Band Noise

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2007-01-01

    Full Text Available We have shown that duct modeling using the generalized RBF neural network (DM_RBF), which can model nonlinear behavior, suppresses variable-frequency narrow-band noise in a duct more efficiently than an FX-LMS algorithm. In our method (DM_RBF), the duct is first identified using a generalized RBF network; the input signal is then passed through a stage of time delays into the generalized RBF network, and a linear combiner at the outputs performs an online identification of the nonlinear system. The weights of the linear combiner are updated by the normalized LMS algorithm. The proposed method is more than three times faster than the FX-LMS algorithm, with 30% lower error. The DM_RBF method also converges when the input frequency changes, whereas the FX-LMS algorithm diverges.
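
    The normalized LMS update used for the linear combiner can be sketched as follows; the signals below are synthetic stand-ins for the delayed model outputs and the duct response, not the DM_RBF setup itself.

      import numpy as np

      def nlms(x, d, n_taps=8, mu=0.5, eps=1e-6):
          """Normalized LMS: adapt linear-combiner weights w so that w.x_k tracks d_k."""
          w = np.zeros(n_taps)
          y = np.zeros(len(d))
          for k in range(n_taps - 1, len(d)):
              xk = x[k - n_taps + 1:k + 1][::-1]   # x[k], x[k-1], ..., most recent first
              y[k] = w @ xk
              e = d[k] - y[k]
              w += mu * e * xk / (eps + xk @ xk)   # normalized step size
          return w, y

      # Toy signals: a random input filtered by an unknown FIR system plus noise.
      rng = np.random.default_rng(0)
      x = rng.normal(size=2000)
      h = np.array([0.6, -0.3, 0.1, 0.05])
      d = np.convolve(x, h, mode="full")[: len(x)] + 0.01 * rng.normal(size=len(x))

      w, y = nlms(x, d)
      print("identified taps:", np.round(w[: len(h)], 3))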

  11. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Research of The Deeper Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao You Rong

    2016-01-01

    Full Text Available Neural networks (NNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. Many neural models with different network structures have been constructed. In this report, a deeper neural network (DNN) architecture is proposed. The training algorithm of the deeper neural network involves searching for the global optimum on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed, from simple to complicated cases, and its main features are identified. Based on these features, the initialization method and training algorithm of the DNN are designed. For initialization, a block-uniform design method is proposed, which divides the error surface into blocks and finds the optimal block using the uniform design method. For training, an improved gradient-descent method is proposed, which adds a penalty term to the cost function of the standard gradient-descent method. This algorithm gives the network a strong approximation ability and keeps the network state stable. All of these improve the practicality of the neural network.

  13. Neural network topology design for nonlinear control

    Science.gov (United States)

    Haecker, Jens; Rudolph, Stephan

    2001-03-01

    Neural networks, especially in nonlinear system identification and control applications, are typically considered to be black boxes which are difficult to analyze and understand mathematically. For this reason, an in-depth mathematical analysis offering insight into the different neural network transformation layers, based on a theoretical transformation scheme, is desirable but has until now been neither available nor known. In previous works it has been shown how proven engineering methods such as dimensional analysis and the Laplace transform may be used to construct a neural controller topology for time-invariant systems. Using the knowledge of the neural correspondences of these two classical methods, the internal nodes of the network could also be successfully interpreted after training. As a further extension of these works, the paper describes the latest state of a theoretical interpretation framework describing the neural network transformation sequences in nonlinear system identification and control. This is achieved by incorporating the method of exact input-output linearization into the two transformation sequences mentioned above, dimensional analysis and the Laplace transformation. Based on these three theoretical considerations, neural network topologies may be designed in special situations by pure translation, in the sense of a structural compilation of the known classical solutions into their corresponding neural topology. Based on known exemplary results, the paper synthesizes the proposed approach into the visionary goal of a structural compiler for neural networks. This structural compiler is intended to automatically convert classical control formulations into their equivalent neural network structure, based on the principles of equivalence between formula and operator, and between operator and structure, which are discussed in detail in this work.

  14. A 3D Active Learning Application for NeMO-Net, the NASA Neural Multi-Modal Observation and Training Network for Global Coral Reef Assessment

    Science.gov (United States)

    van den Bergh, Jarrett; Schutz, Joey; Li, Alan; Chirayath, Ved

    2017-01-01

    NeMO-Net, the NASA neural multi-modal observation and training network for global coral reef assessment, is an open-source deep convolutional neural network and interactive active learning training software aiming to accurately assess the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology as well as mapping of spatial distribution. We present an interactive video game prototype for tablet and mobile devices where users interactively label morphology classifications over mm-scale 3D coral reef imagery captured using fluid lensing to create a dataset that will be used to train NeMO-Net's convolutional neural network. The application currently allows for users to classify preselected regions of coral in the Pacific and will be expanded to include additional regions captured using our NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as lower-resolution airborne remote sensing data from the ongoing NASA CORAL campaign. Active learning applications present a novel methodology for efficiently training large-scale Neural Networks wherein variances in identification can be rapidly mitigated against control data. NeMO-Net periodically checks users' input against pre-classified coral imagery to gauge their accuracy and utilizes in-game mechanics to provide classification training. Users actively communicate with a server and are requested to classify areas of coral for which other users had conflicting classifications and contribute their input to a larger database for ranking. In partnering with Mission Blue and IUCN, NeMO-Net leverages an international consortium of subject matter experts to classify areas of confusion identified by NeMO-Net and generate additional labels crucial for identifying decision boundary locations in coral reef assessment.

  15. Computerized Cognitive Training Restores Neural Activity within the Reality Monitoring Network in Schizophrenia

    OpenAIRE

    Subramaniam, Karuna; Luks, Tracy L; Fisher, Melissa; Simpson, Gregory V; Nagarajan, Srikantan; Vinogradov, Sophia

    2012-01-01

    Schizophrenia patients suffer from severe cognitive deficits, such as impaired reality monitoring. Reality monitoring is the ability to distinguish the source of internal experiences from outside reality. During reality monitoring tasks, schizophrenia patients make errors identifying “I made it up” items, and even during accurate performance, they show abnormally low activation of the medial prefrontal cortex (mPFC), a region that supports self-referential cognition. We administered 80 hours ...

  16. Application of CMAC Neural Network Coupled with Active Disturbance Rejection Control Strategy on Three-motor Synchronization Control System

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-04-01

    Full Text Available A three-motor synchronous coordination system is a complex, nonlinear MIMO control system that often operates under poor working conditions. Advanced control strategies are required to improve the control performance of the system and to decouple the main motor speed from the tension. A Cerebellar Model Articulation Controller coupled with Active Disturbance Rejection Control (CMAC-ADRC) strategy is proposed. The speed of the main motor and the tensions between the motors are decoupled by the extended state observer (ESO) in the ADRC, which compensates internal and external disturbances of the system online; the ESO thereby improves the disturbance rejection of the system while the control model is optimized. Feedforward control is implemented by a CMAC neural network controller, which improves the control precision of the system. Using CMAC-ADRC, the overshoot of the system can be reduced without affecting its dynamic response. The simulation results show that the CMAC-ADRC control strategy outperforms the traditional PID control strategy and achieves the decoupling between speed and tension. The control system using CMAC-ADRC has strong disturbance rejection, a short settling time and a small overshoot. The magnitude of the system response induced by a disturbance is 6.43 % smaller than with conventional PID control, the recovery time is 0.18 seconds shorter, and the triangular-wave tracking error is 0.24 rad/min smaller. The CMAC-ADRC control strategy is therefore well suited to three-motor synchronous coordinated control.

  17. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, in which the neural network topology and other parameters were fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
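
    A minimal sketch of the genetic-algorithm alternative to back-propagation: the weights of a small fixed-topology network are evolved directly against the training error. The series is synthetic (not JPY/USD data) and the GA operators below are generic choices, not necessarily those used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy exchange-rate-like series (synthetic), standardized for fitting.
      t = np.arange(400)
      series = 100 + 5 * np.sin(0.05 * t) + rng.normal(scale=0.3, size=t.size)
      series = (series - series.mean()) / series.std()

      lags, n_hidden = 4, 6
      X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
      y = series[lags:]
      n_w = lags * n_hidden + n_hidden + n_hidden + 1      # W1, b1, w2, b2

      def mse(v):
          W1 = v[:lags * n_hidden].reshape(lags, n_hidden)
          b1 = v[lags * n_hidden:lags * n_hidden + n_hidden]
          w2, b2 = v[-n_hidden - 1:-1], v[-1]
          pred = np.tanh(X @ W1 + b1) @ w2 + b2
          return float(np.mean((pred - y) ** 2))

      # Plain generational GA: tournament selection, blend crossover, Gaussian mutation.
      pop = rng.normal(scale=0.5, size=(60, n_w))
      for _ in range(200):
          fit = np.array([mse(ind) for ind in pop])
          children = [pop[fit.argmin()].copy()]            # elitism
          while len(children) < len(pop):
              i, j, k, l = rng.integers(0, len(pop), size=4)
              p1 = pop[i] if fit[i] < fit[j] else pop[j]   # tournament selection
              p2 = pop[k] if fit[k] < fit[l] else pop[l]
              a = rng.random()
              child = a * p1 + (1 - a) * p2                # blend crossover
              child = child + rng.normal(scale=0.05, size=n_w)  # mutation
              children.append(child)
          pop = np.array(children)

      print("best training MSE:", round(min(mse(ind) for ind in pop), 4))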

  18. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency under a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
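
    A hedged sketch of conditional quantile estimation with a neural network: a one-hidden-layer net is trained by subgradient descent on the pinball (quantile) loss, whose expected value is minimized by the conditional tau-quantile. The data and architecture are illustrative only and do not reproduce the kernel-based structure developed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy heteroscedastic data: the conditional quantiles depend on x.
      x = rng.uniform(-2, 2, size=1000)
      y = np.sin(x) + (0.2 + 0.3 * np.abs(x)) * rng.normal(size=x.size)

      tau, n_hidden, lr = 0.9, 10, 0.05        # estimate the conditional 90% quantile
      W1 = rng.normal(scale=0.5, size=(1, n_hidden))
      b1 = np.zeros(n_hidden)
      w2 = rng.normal(scale=0.5, size=n_hidden)
      b2 = 0.0

      X = x[:, None]
      for _ in range(2000):
          H = np.tanh(X @ W1 + b1)                       # hidden layer
          q = H @ w2 + b2                                # predicted conditional quantile
          e = y - q
          dq = np.where(e > 0, -tau, 1.0 - tau) / len(y) # subgradient of mean pinball loss
          gw2 = H.T @ dq
          gb2 = dq.sum()
          dH = np.outer(dq, w2) * (1 - H ** 2)           # backpropagate through tanh
          gW1 = X.T @ dH
          gb1 = dH.sum(axis=0)
          W1 -= lr * gW1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

      q = np.tanh(X @ W1 + b1) @ w2 + b2
      print("empirical coverage (target %.2f): %.2f" % (tau, float(np.mean(y <= q))))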

  19. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
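
    The core idea, state updates applied to whole arrays of neurons instead of per-neuron loops, can be illustrated with a leaky integrate-and-fire network in plain numpy; the parameter values below are arbitrary and the example is not Brian's implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      # Vectorized simulation of N leaky integrate-and-fire neurons:
      # every update is an array operation over all neurons at once.
      N, dt, T = 1000, 1e-3, 1.0          # neurons, time step (s), duration (s)
      tau, v_rest, v_th, v_reset = 20e-3, 0.0, 1.0, 0.0
      p_conn, w = 0.02, 0.12              # connection probability and synaptic weight
      J = w * (rng.random((N, N)) < p_conn)

      v = rng.uniform(0, 1, size=N)       # membrane potentials
      I_ext = 1.2                         # constant suprathreshold drive
      spike_counts = np.zeros(N)

      for _ in range(int(T / dt)):
          v += dt / tau * (v_rest - v + I_ext)    # leaky integration for all neurons
          spiked = v >= v_th                      # boolean spike vector
          v[spiked] = v_reset
          v += J @ spiked                         # propagate spikes through the weights
          spike_counts += spiked

      print("mean firing rate: %.1f Hz" % (spike_counts.mean() / T))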

  20. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I did not manage to get the calorimeter data and I just used the jet data from miniaodsim (ak4 chs). The jet data were not well suited to a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing Convolutional Neural Networks on non-physics data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  1. Neural Network and Letter Recognition.

    Science.gov (United States)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system is comprised of two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is then considered. Through correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses the data through the 'Gabor' transform. Pattern-dependent choice of the centers and wavelengths of the 'Gabor' filters is the source of the shift and scale tolerance of the system. Three different learning schemes were investigated in the recognition unit, namely the error back-propagation learning with hidden units, a simple perceptron learning, and a competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and the perceptron is used as the recognition unit of the ambiguity resolver. Handwriting sets from one hundred different people were collected. Some of these are used as the training sets and the remainder are used as the test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the above three different learning algorithms. The minimum error

  2. Guidance for the verification and validation of neural networks

    CERN Document Server

    Pullum, L; Darrah, M

    2007-01-01

    Guidance for the Verification and Validation of Neural Networks is a supplement to the IEEE Standard for Software Verification and Validation, IEEE Std 1012-1998. Born out of a need by the National Aeronautics and Space Administration's safety- and mission-critical research, this book compiles over five years of applied research and development efforts. It is intended to assist the performance of verification and validation (V&V) activities on adaptive software systems, with emphasis given to neural network systems. The book discusses some of the difficulties with trying to assure adaptive systems in general, presents techniques and advice for the V&V practitioner confronted with such a task, and based on a neural network case study, identifies specific tasking and recommendations for the V&V of neural network systems.

  3. Phase diagram of spiking neural networks.

    Science.gov (United States)

    Seyed-Allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of the neurons are inhibitory and 80% excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective. Inspired by evolution, I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate the networks with pulses and then measure their dynamic range, the dominant frequency of the population activity, the total duration of activity, the maximum population rate and the occurrence time of the maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters - the excitatory-to-inhibitory ratio, the sparseness of connections and the synaptic weights - and can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independently of external stimuli.

  4. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  5. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given.

  6. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of the parameters of a multilayer neural network on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e. to decide whether the voice of an unknown speaker (wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings; the goal was to find the parameters of the neural network giving the highest precision and the shortest validation time. The input data of the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The training, testing and validation data sets were split 70 %, 15 % and 15 %. The result of the research described in this article is a set of parameter settings of the multilayer neural network for four speakers.
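
    A rough end-to-end sketch of such a pipeline (MFCC features feeding a multilayer perceptron), assuming librosa and scikit-learn are available; the "speakers" are synthetic tones rather than recorded voices, so the numbers mean nothing beyond showing the mechanics.

      import numpy as np
      import librosa
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      sr = 16000
      rng = np.random.default_rng(0)

      def fake_utterance(f0):
          """Synthetic 1-second 'voice' with fundamental f0 plus noise (stand-in for recordings)."""
          t = np.arange(sr) / sr
          return (np.sin(2 * np.pi * f0 * t)
                  + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
                  + 0.05 * rng.normal(size=sr))

      X, y = [], []
      for speaker, f0 in enumerate([110, 150, 200, 260]):     # four "speakers"
          for _ in range(20):
              audio = fake_utterance(f0 * (1 + 0.02 * rng.normal()))
              mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
              X.append(mfcc.mean(axis=1))                     # average MFCCs over frames
              y.append(speaker)

      X, y = np.array(X), np.array(y)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(32,), activation="tanh",
                          learning_rate_init=0.01, max_iter=2000, random_state=0)
      clf.fit(Xtr, ytr)
      print("identification accuracy:", clf.score(Xte, yte))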

  7. Person Movement Prediction Using Neural Networks

    OpenAIRE

    Vintan, Lucian; Gellert, Arpad; Petzold, Jan; Ungerer, Theo

    2006-01-01

    Ubiquitous systems use context information to adapt appliance behavior to human needs. Even more convenience is reached if the appliance foresees the user's desires and acts proactively. This paper proposes neural prediction techniques to anticipate a person's next movement. We focus on neural predictors (multi-layer perceptron with back-propagation learning) with and without pre-training. The optimal configuration of the neural network is determined by evaluating movement sequences of real p...

  8. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Science.gov (United States)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods, and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal from background, and nowadays it can be widely and successfully implemented as a part of a physics analysis. In this article we compare deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  9. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods, and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal from background, and nowadays it can be widely and successfully implemented as a part of a physics analysis. In this article we compare deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  10. Coherency and connectivity in oscillating neural networks: linear partialization analysis

    NARCIS (Netherlands)

    Kalitzin, S.; van Dijk, B. W.; Spekreijse, H.; van Leeuwen, W. A.

    1997-01-01

    This paper studies the relation between the functional synaptic connections between two artificial neural networks and the correlation of their spiking activities. The model neurons had realistic non-oscillatory dynamic properties and the networks showed oscillatory behavior as a result of their

  11. [Medical use of artificial neural networks].

    Science.gov (United States)

    Molnár, B; Papik, K; Schaefer, R; Dombóvári, Z; Fehér, J; Tulassay, Z

    1998-01-04

    The main aim of research in medical diagnostics is to develop more exact, cost-effective and easy-to-use systems, procedures and methods for supporting clinicians. In their paper the authors introduce a method that has recently come into focus, referred to as artificial neural networks. Based on the literature of the past 5-6 years, they give a brief review - highlighting the most important works - of the idea behind neural networks and what they are used for in the medical field. The definition, structure and operation of neural networks are discussed. In the application part they collect examples in order to give an insight into neural network application research. It is emphasised that in the near future fundamentally new diagnostic equipment can be developed based on this new technology in the fields of ECG, EEG, and macroscopic and microscopic image analysis systems.

  12. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    methods. That is why it is becoming popular in various fields including coastal engineering. Waves and tides will play important roles in coastal erosion or accretion. This paper briefly describes the back-propagation neural networks and its application...

  13. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied. A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  14. Blood glucose prediction using neural network

    Science.gov (United States)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is specifically used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level than the partial least squares model.
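
    The comparison between partial least squares and a neural network regressor can be sketched with scikit-learn as follows; the six-channel "spectra" and glucose values are simulated stand-ins for the OGTT measurements, not the study's data.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(0)
      # Synthetic stand-in for six-wavelength NIR absorbance readings and glucose levels.
      n = 300
      glucose = rng.uniform(4.0, 12.0, size=n)                 # mmol/L
      spectra = np.column_stack([glucose * c + 0.3 * rng.normal(size=n)
                                 for c in (0.8, 0.5, 0.3, 0.9, 0.2, 0.6)])

      Xtr, Xte, ytr, yte = train_test_split(spectra, glucose, test_size=0.3, random_state=0)

      pls = PLSRegression(n_components=3).fit(Xtr, ytr)
      mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(Xtr, ytr)

      print("PLS MAE: %.2f mmol/L" % mean_absolute_error(yte, pls.predict(Xte).ravel()))
      print("MLP MAE: %.2f mmol/L" % mean_absolute_error(yte, mlp.predict(Xte)))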

  15. FOREX PREDICTION USING A NEURAL NETWORK MODEL

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The prediction decision is very important, because prediction helps to estimate the forex value at a certain time in the future and thus reduces the risk of loss. The aim of this research is to predict the forex market using a neural network model with one-minute time series data, in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method comprises data collection followed by training, learning and testing using a neural network. After evaluation, the results of this research show that the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk in running a forex business. Keywords: prediction, forex, neural network.

  16. Using Neural Networks in Diagnosing Breast Cancer

    National Research Council Canada - National Science Library

    Fogel, David

    1997-01-01

    .... In the current study, evolutionary programming is used to train neural networks and linear discriminant models to detect breast cancer in suspicious and microcalcifications using radiographic features and patient age...

  17. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  18. Isolated Speech Recognition Using Artificial Neural Networks

    National Research Council Canada - National Science Library

    Polur, Prasad

    2001-01-01

    .... A small size vocabulary containing the words YES and NO is chosen. Spectral features using cepstral analysis are extracted per frame and imported to a feedforward neural network which uses a backpropagation with momentum training algorithm...

  19. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.
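
    A minimal sketch of the controller idea only (not the paper's ART1-generated training set or the EV3 hardware): a small feedforward network maps three assumed ultrasonic distance readings to two motor speeds from a handful of hand-made training pairs.

    ```python
    # Toy illustration (assumptions: three simulated distance sensors, two motor speeds,
    # hand-made training pairs). Only the idea of a controlling network is shown.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # sensor readings (left, front, right) in metres -> (left motor, right motor) speeds
    X = np.array([[1.0, 1.0, 1.0],   # free space: go straight
                  [1.0, 0.2, 1.0],   # obstacle ahead: turn
                  [0.2, 1.0, 1.0],   # obstacle left: bear right
                  [1.0, 1.0, 0.2]])  # obstacle right: bear left
    y = np.array([[0.8, 0.8],
                  [0.8, -0.8],
                  [0.8, 0.3],
                  [0.3, 0.8]])

    controller = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=1).fit(X, y)
    print(controller.predict([[0.9, 0.25, 1.0]]))  # expected: a turning command
    ```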

  20. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglova

    2008-11-01

    Full Text Available This paper deals with a path planning and intelligent control of an autonomous robot which should move safely in partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using neural networks-based technique. Our method of the construction of a collision-free path for moving robot among obstacles is based on two neural networks. The first neural network is used to determine the "free" space using ultrasound range finder data. The second neural network "finds" a safe direction for the next robot section of the path in the workspace while avoiding the nearest obstacles. Simulation examples of generated path with proposed techniques will be presented.

  1. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  2. Constructive autoassociative neural network for facial recognition.

    Directory of Open Access Journals (Sweden)

    Bruno J T Fernandes

    Full Text Available Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.

  3. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Genetic Algorithm Optimized Neural Networks Ensemble as. Calibration Model for Simultaneous Spectrophotometric. Estimation of Atenolol and Losartan Potassium in Tablets. Dondeti Satyanarayana*, Kamarajan Kannan and Rajappan Manavalan. Department of Pharmacy, Annamalai University, Annamalainagar, Tamil ...

  4. Healthy human CSF promotes glial differentiation of hESC-derived neural cells while retaining spontaneous activity in existing neuronal networks

    Directory of Open Access Journals (Sweden)

    Heikki Kiiski

    2013-05-01

    The possibilities of human pluripotent stem cell-derived neural cells, ranging from a basic research tool to a treatment option in regenerative medicine, have been well recognized. These cells also offer an interesting tool for in vitro models of neuronal networks to be used for drug screening and neurotoxicological studies and for patient/disease-specific in vitro models. Here, aiming to develop a reductionistic in vitro human neuronal network model, we tested whether human embryonic stem cell (hESC)-derived neural cells could be cultured in human cerebrospinal fluid (CSF) in order to better mimic in vivo conditions. Our results showed that CSF altered the differentiation of hESC-derived neural cells towards glial cells at the expense of neuronal differentiation. The proliferation rate was reduced in CSF cultures. However, even though the use of CSF as the culture medium altered the glial vs. neuronal differentiation rate, the pre-existing spontaneous activity of the neuronal networks persisted throughout the study. These results suggest that it is possible to develop fully human cell and culture-based environments that can further be modified for various in vitro modeling purposes.

  5. Applications of Pulse-Coupled Neural Networks

    CERN Document Server

    Ma, Yide; Wang, Zhaobin

    2011-01-01

    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  6. Neural networks as models of psychopathology.

    Science.gov (United States)

    Aakerlund, L; Hemmingsen, R

    1998-04-01

    Neural network modeling is situated between neurobiology, cognitive science, and neuropsychology. The structural and functional resemblance with biological computation has made artificial neural networks (ANN) useful for exploring the relationship between neurobiology and computational performance, i.e., cognition and behavior. This review provides an introduction to the theory of ANN and how they have linked theories from neurobiology and psychopathology in schizophrenia, affective disorders, and dementia.

  7. Neural network activation during a stop-signal task discriminates cocaine-dependent from non-drug-abusing men.

    Science.gov (United States)

    Elton, Amanda; Young, Jonathan; Smitherman, Sonet; Gross, Robin E; Mletzko, Tanja; Kilts, Clinton D

    2014-05-01

    Cocaine dependence is defined by a loss of inhibitory control over drug-use behaviors, mirrored by measurable impairments in laboratory tasks of inhibitory control. The current study tested the hypothesis that deficits in multiple subprocesses of behavioral control are associated with reliable neural-processing alterations that define cocaine addiction. While undergoing functional magnetic resonance imaging (fMRI), 38 cocaine-dependent men and 27 healthy control men performed a stop-signal task of motor inhibition. An independent component analysis on fMRI time courses identified task-related neural networks attributed to motor, visual, cognitive and affective processes. The statistical associations of these components with five different stop-signal task conditions were selected for use in a linear discriminant analysis to define a classifier for cocaine addiction from a subsample of 26 cocaine-dependent men and 18 controls. Leave-one-out cross-validation accurately classified 89.5% (39/44; chance accuracy = 26/44 = 59.1%) of subjects with 84.6% (22/26) sensitivity and 94.4% (17/18) specificity. The remaining 12 cocaine-dependent and 9 control men formed an independent test sample, for which accuracy of the classifier was 81.9% (17/21; chance accuracy = 12/21 = 57.1%) with 75% (9/12) sensitivity and 88.9% (8/9) specificity. The cocaine addiction classification score was significantly correlated with a measure of impulsiveness as well as the duration of cocaine use for cocaine-dependent men. The results of this study support the ability of a pattern of multiple neural network alterations associated with inhibitory motor control to define a binary classifier for cocaine addiction. © 2012 The Authors, Addiction Biology © 2012 Society for the Study of Addiction.
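
    The classification step can be sketched as follows. The features, subject counts and labels below are placeholders standing in for the study's ICA-derived network-association scores; only the leave-one-out cross-validated linear discriminant analysis mirrors the described procedure.

    ```python
    # Hedged sketch of the classifier evaluation only: leave-one-out cross-validation of a
    # linear discriminant model on per-subject features (random placeholders here).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, n_features = 44, 20           # e.g. network components x task conditions
    X = rng.normal(size=(n_subjects, n_features))
    y = np.array([1] * 26 + [0] * 18)         # 1 = cocaine-dependent, 0 = control (labels only)

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    print("leave-one-out accuracy:", acc.mean())
    ```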

  8. A neural network simulation package in CLIPS

    Science.gov (United States)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for an integrated use of the two techniques and is also extendible to accommodate other AI techniques like semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  9. Stacked Heterogeneous Neural Networks for Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Florin Leon

    2010-01-01

    Full Text Available A hybrid model for time series forecasting is proposed. It is a stacked neural network containing two multilayer perceptrons: one with bipolar sigmoid activation functions, and the other with an exponential activation function in the output layer. As shown by the case studies, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.
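
    A rough sketch of the stacking idea, with arbitrary weights and shapes assumed for illustration: one small network uses bipolar sigmoid (tanh) units throughout, the other uses an exponential output unit, and their forecasts are blended by a mixing weight.

    ```python
    # Illustration only: two tiny networks with different output activations,
    # combined by a stacking weight alpha (which would be tuned on validation data).
    import numpy as np

    def mlp_tanh(x, W1, W2):
        return np.tanh(np.tanh(x @ W1) @ W2)   # bipolar sigmoid throughout

    def mlp_exp(x, W1, W2):
        return np.exp(np.tanh(x @ W1) @ W2)    # exponential output layer

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 3))                # 5 samples, 3 lagged inputs
    W1a, W2a = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))
    W1b, W2b = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))

    alpha = 0.6                                # stacking weight (assumed)
    forecast = alpha * mlp_tanh(x, W1a, W2a) + (1 - alpha) * mlp_exp(x, W1b, W2b)
    print(forecast.ravel())
    ```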

  10. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost in the proposed learning method. Owing to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
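
    The paper's exact cost function is not reproduced here; the sketch below merely contrasts squared error with a generic logarithmic (cross-entropy-like) cost to show why the logarithmic gradient stays large far from the target, which is the mechanism behind the faster convergence claimed above.

    ```python
    # Illustration only: gradient magnitude of squared error vs a logarithmic cost
    # for a target t = 1 and predictions y in (0, 1).
    import numpy as np

    t = 1.0
    y = np.linspace(0.05, 0.95, 5)

    grad_squared = -(t - y)                            # d/dy of 0.5 * (t - y)^2
    grad_logarithmic = -(t / y) + (1 - t) / (1 - y)    # d/dy of -(t*log(y) + (1-t)*log(1-y))

    for yi, gs, gl in zip(y, grad_squared, grad_logarithmic):
        print(f"y={yi:.2f}  |grad squared|={abs(gs):.2f}  |grad log|={abs(gl):.2f}")
    ```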

  11. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  12. Antagonistic neural networks underlying differentiated leadership roles.

    Science.gov (United States)

    Boyatzis, Richard E; Rochford, Kylie; Jack, Anthony I

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks - the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  13. Antagonistic Neural Networks Underlying Differentiated Leadership Roles

    Directory of Open Access Journals (Sweden)

    Richard Eleftherios Boyatzis

    2014-03-01

    Full Text Available The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN) and the Default Mode Network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  14. Antagonistic neural networks underlying differentiated leadership roles

    Science.gov (United States)

    Boyatzis, Richard E.; Rochford, Kylie; Jack, Anthony I.

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks – the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  15. Symbolic processing in neural networks

    OpenAIRE

    Neto, João Pedro; Siegelmann, Hava T.; Costa, J. Félix

    2003-01-01

    In this paper we show that programming languages can be translated into recurrent (analog, rational weighted) neural nets. Implementation of programming languages in neural nets turns out to be not only theoretically exciting, but also to have some practical implications in the recent efforts to merge symbolic and sub-symbolic computation. To be of some use, it should be carried out in a context of bounded resources. Herein, we show how to use resource bounds to speed up computations over neural nets, thro...

  16. Hindcasting cyclonic waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Chakravarty, N.V.

    the back-propagation networks with updated algorithms are used in this paper. A brief description of the working of a back-propagation neural network and three updated algorithms is given below. Backpropagation learning: Backpropagation is the most widely used... algorithm for supervised learning with multi-layer feed-forward networks. The idea of the backpropagation learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary...
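
    The chain-rule computation sketched above can be written out for a one-hidden-layer network as follows; the sigmoid units, squared-error loss and random data are assumptions for illustration, not the wave-hindcasting setup of the paper.

    ```python
    # Minimal numpy sketch of back-propagation for one hidden layer (sigmoid units,
    # squared-error loss), illustrating the repeated application of the chain rule.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))                  # 8 patterns, 3 inputs
    T = rng.uniform(size=(8, 1))                 # targets
    W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))
    lr = 0.5

    for _ in range(2000):
        H = sigmoid(X @ W1)                      # forward pass
        Y = sigmoid(H @ W2)
        dY = (Y - T) * Y * (1 - Y)               # chain rule at the output layer
        dH = (dY @ W2.T) * H * (1 - H)           # chain rule propagated to the hidden layer
        W2 -= lr * H.T @ dY                      # gradient-descent weight updates
        W1 -= lr * X.T @ dH

    print("final mean squared error:", float(np.mean((Y - T) ** 2)))
    ```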

  17. Analytic solution of neural network with disordered lateral inhibition

    Science.gov (United States)

    Hamaguchi, Kosuke; Hatchett, J. P. L.; Okada, Masato

    2006-05-01

    The replica method has played a key role in analyzing systems with disorder, e.g., the Sherrington-Kirkpatrick (SK) model, and associative neural networks. Here we study the influence of disorder in the lateral-inhibition-type interactions on the cooperative and uncooperative behavior of recurrent neural networks by using the replica method. Although the interaction between neurons has a dependency on distance, our model can be solved analytically. Bifurcation analysis identifies the boundaries between paramagnetic, ferromagnetic, spin-glass, and localized phases. In the localized phase, the network shows a bump-like activity, which is often used as a model of spatial working memory or columnar activity in the visual cortex. Simulation results show that disordered interactions can stabilize the drift of the bump position, which is commonly observed in conventional lateral-inhibition-type neural networks.

  18. Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach

    Science.gov (United States)

    2016-03-30

    Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach... Keywords: monitoring, flight parameter, nonlinear modeling, artificial neural network, typical load case. Introduction: Aircraft load monitoring is an... Artificial Neural Networks (ANN), i.e. the BP network and the Kohonen Clustering Network, are applied and revised by Kalman Filter and Genetic Algorithm to build

  19. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems in different areas of human activity. The reason there is so much interest in these technologies is that classical solution methods either do not exist or are not sufficiently robust. They are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is relative simplicity and the possibility of self-learning based on a set of pattern situations. The most commonly used algorithm for the learning phase is back-propagation of error (BPE). The basis of BPE is the minimization of an error function representing the sum of squared errors at the outputs of the neural net over all patterns of the learning set. However, when performing BPE for the first time, one finds that the handling of the learning factor must be supplemented by a suitable method. The stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the value of the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning-factor handling method, and to present the results.
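
    For reference, a standard way to write the BPE objective and weight update mentioned above (the article's compact matrix notation is not reproduced here) is:

    ```latex
    % Sum-of-squared-errors objective over all P patterns of the learning set,
    % and the gradient-descent update with learning factor \eta (assumed standard form):
    E(w) = \frac{1}{2} \sum_{p=1}^{P} \sum_{k} \bigl( t_{k}^{(p)} - y_{k}^{(p)}(w) \bigr)^{2},
    \qquad
    \Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}
    ```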

  20. Fin-and-tube condenser performance evaluation using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ling-Xiao [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); Zhang, Chun-Lu [China R and D Center, Carrier Corporation, No. 3239 Shen Jiang Road, Shanghai 201206 (China)

    2010-05-15

    The paper presents a neural network approach to performance evaluation of the fin-and-tube air-cooled condensers which are widely used in air-conditioning and refrigeration systems. Inputs of the neural network include refrigerant and air flow rates, refrigerant inlet temperature and saturated temperature, and entering air dry-bulb temperature. Outputs of the neural network consist of the heating capacity and the pressure drops on both the refrigerant and air sides. The multi-input multi-output (MIMO) neural network is separated into multi-input single-output (MISO) neural networks for training. Afterwards, the trained MISO neural networks are combined into a MIMO neural network, which means that the number of training data sets is determined by the biggest MISO neural network, not by the whole MIMO network. Compared with a validated first-principle model, the standard deviations of the neural network models are less than 1.9%, and all errors fall within ±5%. (author)
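
    The MISO-to-MIMO decomposition can be sketched as below, with simulated data and network sizes assumed for illustration: one single-output network is trained per output quantity and the trained networks are then queried together as one multi-output model.

    ```python
    # Sketch of the MISO-to-MIMO idea under assumed data shapes (not the paper's data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))        # 5 inputs: flow rates and temperatures (simulated)
    Y = rng.normal(size=(200, 3))        # 3 outputs: capacity and two pressure drops (simulated)

    miso_nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=k).fit(X, Y[:, k])
                 for k in range(Y.shape[1])]

    def mimo_predict(x):
        """Combine the trained MISO networks into a single MIMO prediction."""
        return np.column_stack([net.predict(x) for net in miso_nets])

    print(mimo_predict(X[:2]).shape)     # (2, 3)
    ```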

  1. On sparsely connected optimal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V. [Los Alamos National Lab., NM (United States); Draghici, S. [Wayne State Univ., Detroit, MI (United States)

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT²) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds--for good estimates of the area; and (ii) the fan-ins and the length of the wires--for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon's decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.

  2. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using an artificial neural network are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data and the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
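
    A hedged sketch of the forecasting network described above: one hidden layer, four technical indicators plus the day of the week as inputs, and next-day movement as the target. The indicator values, network size and use of scikit-learn are assumptions; the system's self-determination of the optimal topology is not reproduced.

    ```python
    # Illustration only: one-hidden-layer classifier for next-day price movement.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    indicators = rng.normal(size=(300, 4))                 # four technical indicators (simulated)
    weekday = rng.integers(0, 5, size=(300, 1))            # 0 = Monday ... 4 = Friday
    X = np.hstack([indicators, weekday])
    y = (rng.uniform(size=300) > 0.5).astype(int)          # 1 = price up next day (simulated)

    model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
    print("training accuracy:", model.score(X, y))
    ```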

  3. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered under a mild set of assumptions. A number of applications...

  4. Medical Text Classification using Convolutional Neural Networks

    OpenAIRE

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.

  5. Medical Text Classification Using Convolutional Neural Networks.

    Science.gov (United States)

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.

  6. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  7. Visual Servoing from Deep Neural Networks

    OpenAIRE

    Bateux, Quentin; Marchand, Eric; Leitner, Jürgen; Chaumette, Francois; Corke, Peter

    2017-01-01

    International audience; We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing c...

  8. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  9. Electronic device aspects of neural network memories

    Science.gov (United States)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  10. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, which can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  11. An overview on development of neural network technology

    Science.gov (United States)

    Lin, Chun-Shin

    1993-01-01

    The goal of the study was to obtain a bird's-eye view of current neural network technology and the neural network research activities in NASA. The purpose was twofold. One was to provide a reference document for NASA researchers who want to apply neural network techniques to solve their problems. The other was to report our survey results regarding NASA research activities and to provide a view of what NASA is doing, what potential difficulties exist and what NASA can/should do. In a ten-week study period, we interviewed ten neural network researchers at the Langley Research Center and sent out 36 survey forms to researchers at the Johnson Space Center, Lewis Research Center, Ames Research Center and Jet Propulsion Laboratory. We also sent out 60 similar forms to educators and corporation researchers to collect general opinions regarding this field. Twenty-eight survey forms, 11 from NASA researchers and 17 from outside, were returned. Survey results were reported in our final report. In the final report, we first provided an overview of the neural network technology. We reviewed ten neural network structures, discussed the applications in five major areas, and compared the analog, digital and hybrid electronic implementations of neural networks. In the second part, we summarized known NASA neural network research studies and reported the results of the questionnaire survey. Survey results show that most studies are still in the development and feasibility study stage. We compared the techniques, application areas, researchers' opinions on this technology, and many aspects between NASA and non-NASA groups. We also summarized their opinions on difficulties encountered. Applications are considered the top research priority by most researchers. Hardware development and learning algorithm improvement are next. The lack of financial and management support is among the difficulties in research study. All researchers agree that the use of neural networks could result in

  12. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1990-07-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen with any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and noncontrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include data base manipulation and the solving of routing and classification types of optimization problems. Neural Networks are constructed from neurons, which in electronics or software attempt to model, but are not constrained by, the real thing, i.e., neurons in our gray matter. Neurons are simple processing units connected to many other neurons over pathways which modify the incoming signals. A single synthetic neuron typically sums its weighted inputs, runs this sum through a non-linear function, and produces an output. In the brain, neurons are connected in a complex topology; in hardware/software the topology is typically much simpler, with neurons lying side by side, forming layers of neurons which connect to the layer of neurons which receive their outputs. This simplistic model is much easier to construct than the real thing, and yet can solve real problems. The information in a network, or its "memory", is completely contained in the weights on the connections from one neuron to another. Establishing these weights is called "training" the network. Some networks are trained by design -- once constructed, no further learning takes place. Other types of networks require iterative training once wired up, but are not trainable once taught. Still other types of networks can continue to learn after initial construction. The main benefit of using Neural Networks is their ability to work with conflicting or incomplete ("fuzzy") data sets. This ability and its usefulness will become evident in the following
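
    The "single synthetic neuron" described above reduces to a few lines of code (a sigmoid is chosen here as the non-linear function for illustration):

    ```python
    # A weighted sum of inputs passed through a non-linear squashing function.
    import numpy as np

    def neuron(inputs, weights, bias):
        """Weighted sum followed by a sigmoid non-linearity."""
        return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

    print(neuron(np.array([0.5, -1.0, 2.0]), np.array([0.8, 0.1, -0.4]), bias=0.2))
    ```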

  13. Neural networks to formulate special fats

    Directory of Open Access Journals (Sweden)

    Garcia, R. K.

    2012-09-01

    Full Text Available Neural networks are a branch of artificial intelligence based on the structure and development of biological systems, having as its main characteristic the ability to learn and generalize knowledge. They are used for solving complex problems for which traditional computing systems have a low efficiency. To date, applications have been proposed for different sectors and activities. In the area of fats and oils, the use of neural networks has focused mainly on two issues: the detection of adulteration and the development of fatty products. The formulation of fats for specific uses is the classic case of a complex problem where an expert or group of experts defines the proportions of each base, which, when mixed, provide the specifications for the desired product. Some conventional computer systems are currently available to assist the experts; however, these systems have some shortcomings. This article describes in detail a system for formulating fatty products, shortenings or special fats, from three or more components by using neural networks (MIX. All stages of development, including design, construction, training, evaluation, and operation of the network will be outlined.


  14. Neutron spectrometry with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail: rvega@cantera.reduaz.mx

    2005-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
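
    The unfolding setup can be sketched as a regression network mapping count rates to a 31-group spectrum. The number of spheres, the response matrix and the spectra below are synthetic stand-ins for the UTA4 matrix and the 129 training spectra; only the input/output arrangement follows the description above.

    ```python
    # Illustrative sketch only: count rates in, unfolded spectrum out.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_spheres, n_groups, n_spectra = 7, 31, 129                    # sphere count is assumed
    response = rng.uniform(0.0, 1.0, size=(n_spheres, n_groups))   # surrogate response matrix
    spectra = rng.uniform(0.0, 1.0, size=(n_spectra, n_groups))    # surrogate training spectra
    count_rates = spectra @ response.T                             # folded count rates

    unfolder = MLPRegressor(hidden_layer_sizes=(40,), max_iter=5000, random_state=0)
    unfolder.fit(count_rates, spectra)                             # train the unfolding network

    test = spectra[:1] @ response.T
    print(np.sqrt(np.mean((unfolder.predict(test) - spectra[:1]) ** 2)))  # RMS error
    ```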

  15. Neutron spectrometry using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Carrillo, Hector Rene (Unidad Academica de Estudios Nucleares; Unidad Academica de Ing. Electrica; Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac., Mexico). E-mail: fermineutron@yahoo.com; Martin Hernandez-Davila, Victor (Unidad Academica de Estudios Nucleares; Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Mexico); Manzanares-Acuna, Eduardo (Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Mexico); Mercado Sanchez, Gema A. (Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Mexico); Pilar Iniguez de la Torre, Maria (Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid, Valladolid, Spain); Barquero, Raquel (Hospital Universitario Rio Hortega, Valladolid, Spain); Palacios, Francisco; Mendez Villafane, Roberto (Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid; Universidad Europea Miguel de Cervantes, C. Padre Julio Chevalier No. 2, 47012 Valladolid, Spain); Arteaga Arteaga, Tarcicio (Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas; Envases de Zacatecas, SA de CV, Parque Industrial de Calera de Victor Rosales, Zac., Mexico); Manuel Ortiz Rodriguez, Jose (Unidad Academica de Estudios Nucleares; Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Mexico)

    2006-04-15

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab(R) program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem.

  16. Representations in neural network based empirical potentials

    Science.gov (United States)

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios

    2017-07-01

    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.

  17. Community structure of complex networks based on continuous neural network

    Science.gov (United States)

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou

    2017-09-01

    As a new subject, the study of complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so it is a very important task to analyze the community structure of complex networks accurately. In this paper, we study the problem of extracting the community structure of complex networks, and propose a continuous neural network (CNN) algorithm. It is proved that for any given initial value, the continuous neural network algorithm converges to the eigenvector of the maximum eigenvalue of the network modularity matrix. Therefore, the two-community structure can be obtained from the signs of the components of the stable state reached by the network's evolution.
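
    The quantity the CNN algorithm is proved to converge to can be computed directly for a small example: the eigenvector of the largest eigenvalue of the modularity matrix, whose component signs give the split into two communities. The sketch below uses a plain eigendecomposition rather than the paper's continuous dynamics.

    ```python
    # Spectral bisection of a toy graph via the modularity matrix (illustration only).
    import numpy as np

    A = np.array([[0, 1, 1, 0, 0, 0],      # toy undirected graph with two obvious groups
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=float)

    k = A.sum(axis=1)                      # node degrees
    m = A.sum() / 2.0                      # number of edges
    B = A - np.outer(k, k) / (2.0 * m)     # modularity matrix

    eigvals, eigvecs = np.linalg.eigh(B)
    leading = eigvecs[:, np.argmax(eigvals)]
    print("community labels:", (leading > 0).astype(int))
    ```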

  18. Flexible body control using neural networks

    Science.gov (United States)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  19. Identification and Position Control of Marine Helm using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hui ZHU

    2008-02-01

    Full Text Available If nonlinearities such as saturation of the amplifier gain and motor torque, gear backlash, and shaft compliances - just to name a few - are considered in the position control system of a marine helm, traditional control methods are no longer sufficient to improve the performance of the system. In this paper an alternative approach to traditional control methods - a neural network reference controller - is proposed to establish adaptive control of the position of the marine helm so that the controlled variable reaches the command position. This neural network controller comprises two neural networks. One is the plant model network, used to identify the nonlinear system, and the other is the controller network, used to control the output to follow the reference model. The experimental results demonstrate that this adaptive neural network reference controller has much better control performance than is obtained with traditional controllers.

  20. Neural networks in support of manned space

    Science.gov (United States)

    Werbos, Paul J.

    1989-01-01

    Many lobbyists in Washington have argued that artificial intelligence (AI) is an alternative to manned space activity. In actuality, this is the opposite of the truth, especially as regards artificial neural networks (ANNs), the form of AI which has the greatest hope of mimicking human abilities in learning, ability to interface with sensors and actuators, flexibility and balanced judgement. ANNs and their relation to expert systems (the more traditional form of AI), and the limitations of both technologies, are briefly reviewed. A few highlights of recent work on ANNs, including an NSF-sponsored workshop on ANNs for control applications, are given. Current thinking on ANNs for use in certain key areas (the National Aerospace Plane, teleoperation, the control of large structures, fault diagnostics, and docking) which may be crucial to the long-term future of man in space is discussed.

  1. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    Science.gov (United States)

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.

  2. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  3. Memory-optimal neural network approximation

    Science.gov (United States)

    Bölcskei, Helmut; Grohs, Philipp; Kutyniok, Gitta; Petersen, Philipp

    2017-08-01

    We summarize the main results of a recent theory, developed by the authors, establishing fundamental lower bounds on the connectivity and memory requirements of deep neural networks as a function of the complexity of the function class to be approximated by the network. These bounds are shown to be achievable. Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, shearlets, ridgelets, α-shearlets, and more generally α-molecules. This result elucidates a remarkable universality property of deep neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks which provide close-to-optimal approximation rates at minimal connectivity. Moreover, stochastic gradient descent is found to actually learn approximations that are sparse in the representation system optimally sparsifying the function class the network is trained on.

  4. Neural networks for sign language translation

    Science.gov (United States)

    Wilson, Beth J.; Anspach, Gretel

    1993-09-01

    A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here is used to classify video images of handshapes into their linguistic counterpart in American Sign Language. The video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors are then used as inputs to a neural network that classifies their shapes. The network is trained with various examples from different signers and is tested with new images from new signers. The results have shown that for coarse handshape classes, the network is invariant to the type of camera used to film the various signers and to the segmentation technique.
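    As a hedged illustration of the preprocessing step described above, the following sketch computes translation-, scale- and rotation-invariant Fourier descriptors of a closed contour and feeds them to an off-the-shelf classifier. The contour generator and classifier settings are placeholders; the original system's preprocessing details may differ:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def fourier_descriptors(contour, n_coeffs=16):
    """Contour is an (N, 2) array of boundary points of a hand silhouette."""
    z = contour[:, 0] + 1j * contour[:, 1]          # complex representation of the boundary
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                 # drop DC term -> translation invariance
    coeffs = coeffs / (np.abs(coeffs[1]) + 1e-12)   # scale normalisation
    return np.abs(coeffs[1:n_coeffs + 1])           # magnitudes -> rotation/start-point invariance

def ellipse(a, b, rng, n=128, noise=0.02):
    """Toy silhouettes: noisy ellipses standing in for two handshape classes."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)
    return pts + rng.normal(0, noise, pts.shape)

rng = np.random.default_rng(0)
X = [fourier_descriptors(ellipse(1.0, 0.4, rng)) for _ in range(50)] + \
    [fourier_descriptors(ellipse(1.0, 0.9, rng)) for _ in range(50)]
y = [0] * 50 + [1] * 50
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```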

  5. Equivalence of Conventional and Modified Network of Generalized Neural Elements

    Directory of Open Access Journals (Sweden)

    E. V. Konovalov

    2016-01-01

    Full Text Available The article is devoted to the analysis of neural networks consisting of generalized neural elements. The first part of the article proposes a new neural network model: a modified network of generalized neural elements (MGNE-network). This network develops the model of the generalized neural element, whose formal description contains some flaws. In the model of the MGNE-network these drawbacks are overcome. A neural network is introduced all at once, without a preliminary description of the model of a single neural element and of the method by which such elements interact. The description of the neural network mathematical model is simplified, which makes it relatively easy to construct a simulation model on its basis and to conduct numerical experiments. The model of the MGNE-network is universal, uniting properties of networks consisting of neurons-oscillators and neurons-detectors. In the second part of the article we prove the equivalence of the dynamics of the two considered neural networks: the network consisting of classical generalized neural elements, and the MGNE-network. We introduce the definition of equivalence in the functioning of the generalized neural element and the MGNE-network consisting of a single element. Then we introduce the definition of the equivalence of the dynamics of the two neural networks in general. The correlation between different parameters of the two considered neural network models is determined. We discuss the issue of matching the initial conditions of the two considered neural network models. We prove the theorem about the equivalence of the dynamics of the two considered neural networks. This theorem allows us to apply all previously obtained results for networks consisting of classical generalized neural elements to the MGNE-network.

  6. Piecewise convexity of artificial neural networks.

    Science.gov (United States)

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
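    A small numerical illustration (not taken from the paper) of the piecewise structure: under the squared error, the loss of a single rectifier neuron is a convex quadratic on each region of parameter space where the set of active data points is fixed, with kinks only where a point enters or leaves that set:

```python
import numpy as np

# Squared-error loss of a single rectifier neuron f(x) = max(0, w*x + b) on fixed data.
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 50)
y = np.maximum(0, 2.0 * x - 0.5) + rng.normal(0, 0.1, 50)

def loss(w, b):
    return np.mean((np.maximum(0, w * x + b) - y) ** 2)

# Within each region where the active set {i : w*x_i + b > 0} is fixed, the prediction is
# affine in (w, b), so the loss is a convex quadratic there; kinks appear only where a
# data point enters or leaves the active set.
b = -0.5
for w in np.linspace(-2.0, 4.0, 13):
    n_active = int(np.sum(w * x + b > 0))
    print(f"w = {w:+.1f}   active points = {n_active:2d}   loss = {loss(w, b):.3f}")
```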

  7. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics; 2. Feed-forward networks, Perceptrons, Function approximators; 3. Self-organisation, Feature Maps; 4. Feed-back networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection

  8. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding to improve fiber quality. To demonstrate the applicability of this approach, this research was carried out using the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by means of a weighted average of the score (1 to 5) determined for each HVI characteristic evaluated, according to its industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index: when using fiber length associated with the short fiber index, fiber maturity, and micronaire index, the artificial neural networks presented better results than when using only fiber length and the previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better genotype classification. These results indicate that artificial neural networks have great potential for use in the different stages of a cotton breeding program aimed at improving the fiber quality of future cultivars.

  9. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with a computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habit, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications: the typing stream. This paper experimentally uses a neural network approach to analyze the disabled users' typing streams both in general and specific ways to identify their typing behaviors and, subsequently, to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first three HR in symbol prediction are obtained based on the analysis of a user's typing history through the FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, with 70% of all test scores above the basic correction rates. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool to analyze the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.

  10. Slow diffusive dynamics in a chaotic balanced neural network.

    Science.gov (United States)

    Shaham, Nimrod; Burak, Yoram

    2017-05-01

    It has been proposed that neural noise in the cortex arises from chaotic dynamics in the balanced state: in this model of cortical dynamics, the excitatory and inhibitory inputs to each neuron approximately cancel, and activity is driven by fluctuations of the synaptic inputs around their mean. It remains unclear whether neural networks in the balanced state can perform tasks that are highly sensitive to noise, such as storage of continuous parameters in working memory, while also accounting for the irregular behavior of single neurons. Here we show that continuous parameter working memory can be maintained in the balanced state, in a neural circuit with a simple network architecture. We show analytically that in the limit of an infinite network, the dynamics generated by this architecture are characterized by a continuous set of steady balanced states, allowing for the indefinite storage of a continuous parameter. In finite networks, we show that the chaotic noise drives diffusive motion along the approximate attractor, which gradually degrades the stored memory. We analyze the dynamics and show that the slow diffusive motion induces slowly decaying temporal cross correlations in the activity, which differ substantially from those previously described in the balanced state. We calculate the diffusivity, and show that it is inversely proportional to the system size. For large enough (but realistic) neural population sizes, and with suitable tuning of the network connections, the proposed balanced network can sustain continuous parameter values in memory over time scales larger by several orders of magnitude than the single neuron time scale.
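    For readers who want to experiment, the following is an illustrative simulation of a generic binary-neuron balanced network in the spirit of the balanced-state literature (not the authors' specific architecture or parameters); excitation and inhibition approximately cancel, and activity is driven by input fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)
NE, NI, K = 800, 200, 100                # excitatory / inhibitory sizes, in-degree
J = 1.0 / np.sqrt(K)                     # synaptic strength scaling ~ 1/sqrt(K)
g, m0, theta = 4.0, 0.3, 1.0             # relative inhibition, external drive, threshold
N = NE + NI

# Sparse random connectivity: each neuron receives K excitatory and K inhibitory inputs.
C = np.zeros((N, N))
for i in range(N):
    C[i, rng.choice(NE, K, replace=False)] = J
    C[i, NE + rng.choice(NI, K, replace=False)] = -g * J

s = (rng.uniform(size=N) < 0.1).astype(float)    # binary neuron states
ext = np.sqrt(K) * m0                            # external drive of the same order as recurrence
rates = []
for step in range(20000):                        # asynchronous random-sequential updates
    i = rng.integers(N)
    s[i] = 1.0 if C[i] @ s + ext > theta else 0.0
    if step % 100 == 0:
        rates.append(s[:NE].mean())

# Excitatory and inhibitory inputs each scale like sqrt(K) but nearly cancel, so the
# population settles at a low, irregularly fluctuating activity level.
print("mean E activity:", round(float(np.mean(rates)), 3), " std:", round(float(np.std(rates)), 3))
```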

  11. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    Science.gov (United States)

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn(II) is one of the common heavy metal pollutants found in industrial effluents. Removal of this pollutant from industrial effluents can be accomplished by various techniques, of which adsorption has been found to be an efficient method. The application of adsorption is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell based agricultural waste is examined for its efficiency in removing Zn(II) from wastewater and aqueous solution. The influence of independent process variables, such as initial concentration, pH, residence time, activated carbon (AC) dosage, and process temperature, on the removal of Zn(II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the design-of-experiments matrix, 50 experimental runs are performed with each process variable in the experimental range. The optimal values of the process variables for achieving maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model, consisting of first-order and second-order regression terms, is developed using analysis of variance and the RSM-CCD framework. Particle swarm optimization, a meta-heuristic optimization method, is embedded in the ANN architecture to optimize the search space of the neural network. The optimized trained neural network describes the testing data and validation data well, with R2 equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of the ANN-PSO model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
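    A minimal sketch of the PSO-embedded-ANN idea, using particle swarm optimization to search the weight space of a small feedforward network on synthetic stand-in data (the process variables, network size, and PSO constants below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 5 process variables -> removal efficiency (synthetic, not the paper's data).
X = rng.uniform(0, 1, (50, 5))
y = np.sin(X @ rng.uniform(-2, 2, 5)) * 0.5 + 0.5

def unpack(theta, n_in=5, n_hid=8):
    """Map a flat particle position to the weights of a 5-8-1 network."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid]; i += n_hid
    return W1, b1, W2, theta[i]

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

dim = 5 * 8 + 8 + 8 + 1
n_particles, iters = 30, 300
pos = rng.normal(0, 0.5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

w_inertia, c1, c2 = 0.7, 1.5, 1.5            # standard PSO constants (illustrative values)
for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best training MSE:", pbest_f.min())
```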

  12. Neural network approaches to dynamic collision-free trajectory generation.

    Science.gov (United States)

    Yang, S X; Meng, M

    2001-01-01

    In this paper, dynamic collision-free trajectory generation in a nonstationary environment is studied using biologically inspired neural network approaches. The proposed neural network is topologically organized, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The state space of the neural network can be either the Cartesian workspace or the joint space of multi-joint robot manipulators. There are only local lateral connections among neurons. The real-time optimal trajectory is generated through the dynamic activity landscape of the neural network without explicitly searching over the free space or the collision paths, without explicitly optimizing any global cost functions, without any prior knowledge of the dynamic environment, and without any learning procedures. Therefore the model algorithm is computationally efficient. The stability of the neural network system is guaranteed by the existence of a Lyapunov function candidate. In addition, this model is not very sensitive to the model parameters. Several model variations are presented and the differences are discussed. As examples, the proposed models are applied to generate collision-free trajectories for a mobile robot to solve a maze-type problem, to avoid concave U-shaped obstacles, to track a moving target and at the same time avoid varying obstacles, and to generate a trajectory for a two-link planar robot with two targets. The effectiveness and efficiency of the proposed approaches are demonstrated through simulation and comparison studies.
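    The shunting dynamics described above can be sketched on a small 2-D grid workspace as follows: the target receives a large positive external input, obstacles receive a large negative input, and a trajectory is read out by hill-climbing on the activity landscape. Grid size, gains, and the wrap-around neighbour handling are simplifications for illustration, not the authors' exact model:

```python
import numpy as np

A, B, D, mu = 10.0, 1.0, 1.0, 1.0        # decay rate, upper bound, lower bound, lateral gain
H, W = 15, 15                            # grid workspace
dt, steps = 0.01, 2000

I = np.zeros((H, W))
I[12, 12] = 100.0                        # target: large positive external input
for r in range(3, 12):
    I[r, 7] = -100.0                     # obstacle wall: large negative external input

nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def lateral_input(x):
    """Distance-weighted sum of the positive activities of the 8 neighbours.
    (np.roll wraps at the borders; acceptable for this interior example.)"""
    s = np.zeros_like(x)
    for dr, dc in nbrs:
        s += (mu / np.hypot(dr, dc)) * np.roll(np.roll(np.maximum(x, 0.0), dr, axis=0), dc, axis=1)
    return s

x = np.zeros((H, W))
for _ in range(steps):                   # Euler integration of the shunting equation
    exc = np.maximum(I, 0.0) + lateral_input(x)
    inh = np.maximum(-I, 0.0)
    x += dt * (-A * x + (B - x) * exc - (D + x) * inh)

# Read out a collision-free trajectory by hill-climbing on the activity landscape.
pos, path = (2, 2), [(2, 2)]
for _ in range(60):
    cands = [(pos[0] + dr, pos[1] + dc) for dr, dc in nbrs
             if 0 <= pos[0] + dr < H and 0 <= pos[1] + dc < W]
    best = max(cands, key=lambda p: x[p])
    if x[best] <= x[pos]:
        break                            # no uphill neighbour left
    pos = best
    path.append(pos)
    if pos == (12, 12):
        break
print(path)
```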

  13. Artificial neural network in cosmic landscape

    Science.gov (United States)

    Liu, Junyu

    2017-12-01

    In this paper we propose that artificial neural networks, the basis of machine learning, are useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically need exponential complexity when the number of fields is large. However, a basic application of an artificial neural network could solve the problem, based on the universal approximation theorem for the multilayer perceptron. A toy model of inflation with multiple light fields is investigated numerically as an example of such an application.

  14. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image-based techniques or multivariate approaches using high-level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high-level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  15. Automatic identification of species with neural networks.

    Science.gov (United States)

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  16. Automatic identification of species with neural networks

    Directory of Open Access Journals (Sweden)

    Andrés Hernández-Serna

    2014-11-01

    Full Text Available A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  17. Pulse image recognition using fuzzy neural network.

    Science.gov (United States)

    Xu, L S; Meng, Max Q -H; Wang, K Q

    2007-01-01

    The automatic recognition of pulse images is the key in the research of computerized pulse diagnosis. In order to automatically differentiate the pulse patterns by using small samples, a fuzzy neural network to classify pulse images based on the knowledge of experts in traditional Chinese pulse diagnosis was designed. The designed classifier can make hard decision and soft decision for identifying 18 patterns of pulse images at the accuracy of 91%, which is better than the results that achieved by back-propagation neural network.

  18. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    failure" which is main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceed the effective shear strength. These slides have been referred to as landslide. An expert system based on artificial...... neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The Artificial neural network model of this research uses slope characteristics as input and leads to the output in form of the probability of failure...

  19. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  20. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  1. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR) image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.

  2. Neural Network Based Active Disturbance Rejection Control of a Novel Electrohydraulic Servo System for Simultaneously Balancing and Positioning by Isoactuation Configuration

    Directory of Open Access Journals (Sweden)

    Qiang Gao

    2016-01-01

    Full Text Available To satisfy the lightweight requirements of large pipe weapons, a novel electrohydraulic servo (EHS) system, where the hydraulic cylinder possesses three cavities, is developed and investigated in the present study. In the EHS system, the balancing cavity of the EHS is especially designed for active compensation for the unbalancing force of the system, whereas the two driving cavities are employed for positioning and disturbance rejection of the large pipe. Aiming at simultaneous balancing and positioning of the EHS system, a novel neural network based active disturbance rejection control (NNADRC) strategy is developed. In the NNADRC, the radial basis function (RBF) neural network is employed for online updating of the parameters of the extended state observer (ESO). Thereby, the nonlinear behavior and external disturbance of the system can be accurately estimated and compensated in real time. The efficiency and superiority of the system are critically investigated by conducting numerical simulations, showing that much higher steady accuracy as well as system robustness is achieved when compared with a conventional ADRC control system. It indicates that the NNADRC is a very promising technique for achieving fast, stable, smooth, and accurate control of the novel EHS system.

  3. Exploiting network redundancy for low-cost neural network realizations.

    NARCIS (Netherlands)

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT

    1996-01-01

    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  4. Removing Epistemological Bias From Empirical Observation of Neural Networks

    OpenAIRE

    Waldron, Ronan

    1994-01-01

    Also in Proceedings of the International Joint Conference on Neural Networks, Nagoya, Japan. This paper addresses the application of neural network research to a theory of autonomous systems. Neural networks, while enjoying considerable success in autonomous systems applications, have failed to provide a firm theoretical underpinning to neural systems embedded in their natural ecological context. This paper proposes a stochastic formulation of such an embedding. A neural sys...

  5. Numerical analysis of modeling based on improved Elman neural network.

    Science.gov (United States)

    Jie, Shao; Li, Wang; WeiSong, Zhao; YaQin, Zhong; Malekian, Reza

    2014-01-01

    A modeling based on the improved Elman neural network (IENN) is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL) model, Chebyshev neural network (CNN) model, and basic Elman neural network (BENN) model, the proposed model has better performance.
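    One plausible reading of the hidden-layer construction (hidden units activated by Chebyshev orthogonal basis functions instead of sigmoids) is sketched below; the assignment of polynomial orders to neurons, the tanh squashing of the net input, and the least-squares training of only the readout are assumptions for illustration, not the published model:

```python
import numpy as np

def cheb(n, u):
    """Chebyshev polynomial T_n evaluated on u in [-1, 1]."""
    return np.cos(n * np.arccos(np.clip(u, -1.0, 1.0)))

class ChebyshevElman:
    """Elman-style recurrent net whose hidden units use Chebyshev basis activations."""
    def __init__(self, n_in, n_hid, orders=(0, 1, 2, 3), seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.3, (n_in, n_hid))
        self.Wc = rng.normal(0, 0.3, (n_hid, n_hid))   # context (previous hidden state) weights
        self.orders = np.array([orders[j % len(orders)] for j in range(n_hid)])

    def hidden_states(self, X):
        h = np.zeros(self.Wc.shape[0])
        states = []
        for x_t in X:
            net = x_t @ self.Wx + h @ self.Wc
            h = cheb(self.orders, np.tanh(net))        # Chebyshev basis instead of a sigmoid
            states.append(h)
        return np.array(states)

# Toy memory-effect task: the output depends on the current and the previous input.
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, (500, 1))
y = 0.6 * u[:, 0] + 0.3 * np.roll(u[:, 0], 1) ** 2
net = ChebyshevElman(1, 20)
H = net.hidden_states(u)
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)          # train only the linear readout here
print("training MSE:", np.mean((H @ W_out - y) ** 2))
```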

  6. Product Cost Management Structures: a review and neural network modelling

    Directory of Open Access Journals (Sweden)

    P. Jha

    2003-11-01

    Full Text Available This paper reviews the growth of approaches in product costing and draws synergies with information management and resource planning systems, to investigate the potential application of state-of-the-art neural network modelling techniques. Increasing demands on costing systems to serve multiple decision-making objectives have made it essential to use better techniques for analysis of available data. This need is highlighted in the paper. The approach of neural networks, which have several analogous facets to complement and aid the information demands of modern product costing, Enterprise Resource Planning (ERP) structures and the dominant computing environment (for information management in the object-oriented paradigm), forms the domain for investigation. Simulated data is used in neural network applications across activities that consume resources and deliver products, to generate information for monitoring and control decisions. The results in application for feature extraction and variation detection and their implications are presented in the paper.

  7. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    Full Text Available A modeling based on the improved Elman neural network (IENN) is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL) model, Chebyshev neural network (CNN) model, and basic Elman neural network (BENN) model, the proposed model has better performance.

  8. Inferring low-dimensional microstructure representations using convolutional neural networks

    Science.gov (United States)

    Lubbers, Nicholas; Lookman, Turab; Barros, Kipton

    2017-11-01

    We apply recent advances in machine learning and computer vision to a central problem in materials informatics: the statistical representation of microstructural images. We use activations in a pretrained convolutional neural network to provide a high-dimensional characterization of a set of synthetic microstructural images. Next, we use manifold learning to obtain a low-dimensional embedding of this statistical characterization. We show that the low-dimensional embedding extracts the parameters used to generate the images. According to a variety of metrics, the convolutional neural network method yields dramatically better embeddings than the analogous method derived from two-point correlations alone.
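    A hedged sketch of the pipeline: activations from a pretrained CNN (VGG16 here, as an assumption; the paper's network may differ) are spatially pooled into a descriptor and then embedded with a manifold-learning method from scikit-learn:

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.manifold import Isomap

# Pretrained CNN used as a fixed feature extractor (VGG16 is an assumption, not the paper's choice).
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def activations(img):
    """Spatially averaged activations from the last convolutional block of a PIL image."""
    with torch.no_grad():
        feats = cnn(preprocess(img).unsqueeze(0))       # (1, 512, 7, 7)
    return feats.mean(dim=(2, 3)).squeeze(0).numpy()    # 512-dimensional descriptor

# `images` is assumed to be a list of PIL images of synthetic microstructures:
# X = np.stack([activations(im) for im in images])
# embedding = Isomap(n_components=2, n_neighbors=10).fit_transform(X)
# `embedding` is the low-dimensional representation; ideally it recovers the parameters
# used to generate the microstructural images.
```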

  9. Quantitative Structure-Activity Relationships of Noncompetitive Antagonists of the NMDA Receptor: A Study of a Series of MK801 Derivative Molecules Using Statistical Methods and Neural Network

    Directory of Open Access Journals (Sweden)

    T. Lakhlifi

    2003-04-01

    Full Text Available Abstract: From a series of 50 MK801 derivative molecules, a selected set of 44 compounds was submitted to a principal components analysis (PCA), a multiple regression analysis (MRA), and a neural network (NN). This study shows that the compounds' activity correlates reasonably well with the selected descriptors encoding the chemical structures. The correlation coefficients calculated by MRA and thereafter by NN, r = 0.986 and r = 0.974 respectively, are good enough to evaluate a quantitative model and to predict the activity of MK801 derivatives. To test the performance of this model, the activities of the remaining set of 6 compounds are deduced from the proposed quantitative model by NN. This study proved that the predictive power of this model is relevant.

  10. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  11. On The Comparison of Artificial Neural Network (ANN) and ...

    African Journals Online (AJOL)

    West African Journal of Industrial and Academic Research ... This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for ... Keywords: Multinomial Logistic Regression, Artificial Neural Network, Correct classification rate.

  12. A NEURAL OSCILLATOR-NETWORK MODEL OF TEMPORAL PATTERN GENERATION

    NARCIS (Netherlands)

    Schomaker, Lambert

    Most contemporary neural network models deal with essentially static, perceptual problems of classification and transformation. Models such as multi-layer feedforward perceptrons generally do not incorporate time as an essential dimension, whereas biological neural networks are inherently temporal

  13. Particle swarm optimization of a neural network model in a ...

    Indian Academy of Sciences (India)

    sets of cutting conditions and noting the root mean square (RMS) value of spindle motor current as well as ... A multi- objective optimization of hard turning using neural network modelling and swarm intelligence ... being used in this study), and these activated values in turn become the starting signals for the next adjacent ...

  14. Successful neural network projects at the Idaho National Engineering Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Cordes, G.A.

    1991-01-01

    This paper presents recent and current projects at the Idaho National Engineering Laboratory (INEL) that research and apply neural network technology. The projects are summarized in the paper and their direct application to space reactor power and propulsion systems activities is discussed. 9 refs., 10 figs., 3 tabs.

  15. Artificial Neural Networks for Modeling Knowing and Learning in Science.

    Science.gov (United States)

    Roth, Wolff-Michael

    2000-01-01

    Advocates artificial neural networks as models for cognition and development. Provides an example of how such models work in the context of a well-known Piagetian developmental task and school science activity: balance beam problems. (Contains 59 references.) (Author/WRM)

  16. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2009-11-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  17. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activate a passive network is done by introducing

  18. Neural network for sonogram gap filling

    DEFF Research Database (Denmark)

    Klebæk, Henrik; Jensen, Jørgen Arendt; Hansen, Lars Kai

    1995-01-01

    a neural network for predicting mean frequency of the velocity signal and its variance. The neural network then predicts the evolution of the mean and variance in the gaps, and the sonogram and audio signal are reconstructed from these. The technique is applied on in-vivo data from the carotid artery...... in the sonogram and in the audio signal, rendering the audio signal useless, thus making diagnosis difficult. The current goal for ultrasound scanners is to maintain a high refresh rate for the B-mode image and at the same time attain a high maximum velocity in the sonogram display. This precludes the intermixing...... series, and is shown to yield better results, i.e., the variances of the predictions are lower. The ability of the neural predictor to reconstruct both the sonogram and the audio signal, when only 50% of the time is used for velocity data acquisition, is demonstrated for the in-vivo data...

  19. Digital Neural Networks for New Media

    Science.gov (United States)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  20. Optimizing neural network models: motivation and case studies

    OpenAIRE

    Harp, S A; T. Samad

    2012-01-01

    Practical successes have been achieved  with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally  rem...

  1. Dynamic Object Identification with SOM-based neural networks

    Directory of Open Access Journals (Sweden)

    Aleksey Averkin

    2014-03-01

    Full Text Available In this article a number of neural networks based on self-organizing maps, that can be successfully used for dynamic object identification, is described. Unique SOM-based modular neural networks with vector quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms of learning and operation of such SOM-based neural networks are described in details, also some experimental results and comparison with some other neural networks are given.

  2. Stock Price Prediction Based on Procedural Neural Networks

    OpenAIRE

    Jiuzhen Liang; Wei Song; Mei Wang

    2011-01-01

    We present a spatiotemporal model, namely, procedural neural networks for stock price prediction. Compared with some successful traditional models for simulating the stock market, such as BNN (backpropagation neural network), HMM (hidden Markov model) and SVM (support vector machine), the procedural neural network model processes both spatial and temporal information synchronously without a slide time window, which is typically used in the well-known recurrent neural networks. Two differen...

  3. Computational capabilities of graph neural networks.

    Science.gov (United States)

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    In this paper, we will consider the approximation properties of a recently introduced neural network model called graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries when unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model.

  4. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher ...

  5. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, so research on stock market analysis and prediction has received much attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up the stock market prediction model, and…

  6. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  7. Artificial neural networks and support vector mac

    Indian Academy of Sciences (India)

    Quantitative structure-property relationships of electroluminescent materials: Artificial neural networks and support vector machines to predict electroluminescence of organic molecules. ALANA FERNANDES GOLIN and RICARDO STEFANI. ∗. Laboratório de Estudos de Materiais (LEMAT), Instituto de Ciências Exatas e da ...

  8. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  9. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage...

  10. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  11. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  12. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  13. Separable explanations of neural network decisions

    DEFF Research Database (Denmark)

    Rieger, Laura

    2017-01-01

    Deep Taylor Decomposition is a method used to explain neural network decisions. When applying this method to non-dominant classifications, the resulting explanation does not reflect important features for the chosen classification. We propose that this is caused by the dense layers and propose...

  14. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2017-01-01

    In this work we evaluate the performance of two pre-trained convolutional neural networks fine-tuned on the NIST SD4 benchmark database. The obtained results show that this approach is comparable with other results in the literature, with the advantage of a fast feature extraction stage....

  15. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model...

  16. drinking water treatment using artificial neural network

    African Journals Online (AJOL)

    ogwueleka

    synaptic weights are used to store the knowledge.” The neural network approach is a branch of artificial intelligence. The ANN is based on a model of the human neurological system that consists of basic computing elements (called neurons) interconnected together (Figure 1). The model used for all classification attempts.

  17. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ² test, where the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
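    The regression setup (count rates from a set of Bonner spheres as inputs, dose quantities as outputs) can be sketched as follows with synthetic placeholder data, since the actual spectra, response matrix, and fluence-to-dose coefficients are not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder data: 187 "spectra" x 7 sphere count rates -> 3 dose quantities.
# Real inputs would come from the Bonner sphere response matrix and the measured spectra.
rng = np.random.default_rng(0)
rates = rng.lognormal(mean=0.0, sigma=0.5, size=(187, 7))
doses = rates @ rng.uniform(0.1, 1.0, (7, 3)) + rng.normal(0, 0.01, (187, 3))

X_tr, X_te, y_tr, y_te = train_test_split(rates, doses, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out spectra:", model.score(X_te, y_te))
```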

  18. Learning chaotic attractors by neural networks

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Giles, CL; Takens, F; van den Bleek, CM

    2000-01-01

    An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to short-term predict the time series. At the same time a criterion, developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored

  19. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  20. Neural networks, penalty logic and optimality theory

    NARCIS (Netherlands)

    Blutner, R.; Benz, A.; Blutner, R.

    2009-01-01

    Ever since the discovery of neural networks, there has been a controversy between two modes of information processing. On the one hand, symbolic systems have proven indispensable for our understanding of higher intelligence, especially when cognitive domains like language and reasoning are examined.

  1. Image inpainting using a neural network

    Directory of Open Access Journals (Sweden)

    Gapon Nikolay

    2017-01-01

    Full Text Available The paper describes a new method of two-dimensional signal reconstruction for restoring static images. A new method of spatial reconstruction of static images based on a geometric model using a neural network is proposed; it is based on the search for similar blocks and copying them into the region of distorted or missing pixel values.

  2. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Forecasting Water Levels Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Shreenivas N. Londhe

    2011-06-01

    Full Text Available For all Ocean related activities it is necessary to predict the actual water levels as accurate as possible. The present work aims at predicting the water levels with a lead time of few hours to a day using the technique of artificial neural networks. Instead of using the previous and current values of observed water level time series directly as input and output the water level anomaly (difference between the observed water level and harmonically predicted tidal level is calculated for each hour and the ANN model is developed using this time series. The network predicted anomaly is then added to harmonic tidal level to predict the water levels. The exercise is carried out at six locations, two in The Gulf of Mexico, two in The Gulf of Maine and two in The Gulf of Alaska along the USA coastline. The ANN models performed reasonably well for all forecasting intervals at all the locations. The ANN models were also run in real time mode for a period of eight months. Considering the hurricane season in Gulf of Mexico the models were also tested particularly during hurricanes.
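    A minimal sketch of the anomaly-based scheme on synthetic data: subtract the harmonically predicted tide, train a network on lagged anomalies, then add the tide back to obtain the water-level forecast. The lag length, lead time, and toy tide/surge model are placeholders, not the paper's data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(0, 24 * 120)                              # 120 days of hourly data
tide = 0.8 * np.sin(2 * np.pi * t / 12.42)              # harmonically predicted tidal level
anomaly = np.zeros_like(tide)
for i in range(1, len(t)):                              # slowly varying surge-like anomaly
    anomaly[i] = 0.98 * anomaly[i - 1] + rng.normal(0, 0.02)
observed = tide + anomaly

lags, lead = 24, 6                                      # last 24 h of anomaly -> anomaly 6 h ahead
X = np.array([anomaly[i - lags:i] for i in range(lags, len(t) - lead + 1)])
y = anomaly[lags - 1 + lead:]                           # target: anomaly `lead` hours after the last input
tide_y = tide[lags - 1 + lead:]
observed_y = observed[lags - 1 + lead:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])

# Final forecast = harmonically predicted tide + network-predicted anomaly.
pred_level = tide_y[split:] + model.predict(X[split:])
print("RMSE (m):", np.sqrt(np.mean((pred_level - observed_y[split:]) ** 2)))
```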

  4. Foetal ECG recovery using dynamic neural networks.

    Science.gov (United States)

    Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan

    2004-07-01

    Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetus state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise to the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both
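    As a point of reference, the classical LMS-based adaptive noise cancellation scheme against which the neural models are benchmarked can be sketched as follows; the signals below are crude synthetic stand-ins for the thoracic (reference) and abdominal recordings, not real registers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
maternal = np.sin(2 * np.pi * 1.2 * np.arange(n) / 250)                 # reference (thoracic) maternal ECG stand-in
foetal = 0.2 * np.sign(np.sin(2 * np.pi * 2.1 * np.arange(n) / 250))    # crude foetal component
abdominal = foetal + np.convolve(maternal, [0.6, 0.3, 0.1], mode="same") + rng.normal(0, 0.02, n)

# Normalized LMS adaptive noise cancellation: predict the maternal contribution in the
# abdominal lead from the reference, and keep the residual as the foetal estimate.
order, mu, eps = 8, 0.5, 1e-6
w = np.zeros(order)
foetal_est = np.zeros(n)
for i in range(order, n):
    x = maternal[i - order:i][::-1]          # reference tap vector
    e = abdominal[i] - w @ x                 # residual = foetal estimate + noise
    w += mu * e * x / (x @ x + eps)          # NLMS weight update
    foetal_est[i] = e

print("correlation with the true foetal signal:",
      np.corrcoef(foetal_est[order:], foetal[order:])[0, 1])
```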

  5. MBVCNN: Joint convolutional neural networks method for image recognition

    Science.gov (United States)

    Tong, Tong; Mu, Xiaodong; Zhang, Li; Yi, Zhaoxiang; Hu, Pei

    2017-05-01

    Aiming at the problem that objects in image recognition are rectangular while the inputs to convolutional neural networks are square, an object recognition model is put forward that uses the BING method for object estimation and vectorization of convolutional neural networks to feed square images into the convolutional networks, thereby building joint convolutional neural networks that accept images of multiple sizes. Verified by experiments, the accuracy of multi-object image recognition was improved by 6.70% compared with a single vectorized convolutional neural network. Therefore, the joint convolutional neural network method can enhance the accuracy of image recognition, especially for targets of rectangular shape.

  6. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  7. Extracting knowledge from supervised neural networks in image processing

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert; Jain, R.; Abraham, A.; Faucher, C.; van der Zwaag, B.J.

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  8. neural network based load frequency control for restructuring power

    African Journals Online (AJOL)

    2012-03-01

    Mar 1, 2012 ... Abstract. In this study, an artificial neural network (ANN) application of load frequency control (LFC) of a multi-area power system by using a neural network controller is presented. The comparison between a conventional Proportional Integral (PI) controller and the proposed artificial neural networks ...

  9. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    The application of neural networks to model a laboratory scale inverse fluidized bed reactor has been studied. A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological ...

  10. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    Simple recurrent neural networks are widely used in time series prediction. Most researchers and application developers often choose arbitrarily between Elman or Jordan simple recurrent neural networks for their applications. A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used.

  11. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of RBF neural network for state estimation is investigated by testing its applicability on an IEEE 14-bus ...

  12. New Neural Network Methods for Forecasting Regional Employment

    NARCIS (Netherlands)

    Patuelli, R.; Reggiani, A; Nijkamp, P.; Blien, U.

    2006-01-01

    In this paper, a set of neural network (NN) models is developed to compute short-term forecasts of regional employment patterns in Germany. Neural networks are modern statistical tools based on learning algorithms that are able to process large amounts of data. Neural networks are enjoying

  13. The Artificial Neural Network as a means for modeling Nonlinear Systems

    OpenAIRE

    Drábek Oldøich; Taufer Ivan

    1998-01-01

    The paper deals with nonlinear system identification based on neural networks. The topic of this publication is the simulation of training and testing a neural network. The contribution is aimed at technologists who are well versed in classical identification problems but whose knowledge of neural-network-based identification is only at the stage of theoretical foundations.

  14. The Artificial Neural Network as a means for modeling Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Drábek Oldøich

    1998-12-01

    Full Text Available The paper deals with nonlinear system identification based on neural networks. The topic of this publication is the simulation of training and testing a neural network. The contribution is aimed at technologists who are well versed in classical identification problems but whose knowledge of neural-network-based identification is only at the stage of theoretical foundations.

  15. Algorithm For A Self-Growing Neural Network

    Science.gov (United States)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  16. Iterative free-energy optimization for recurrent neural networks (INFERNO)

    Science.gov (United States)

    2017-01-01

    The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spikes' synchrony as an optimization problem of the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory as a model of the basal ganglia to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle. PMID:28282439

  17. Iterative free-energy optimization for recurrent neural networks (INFERNO).

    Science.gov (United States)

    Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias

    2017-01-01

    The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spikes' synchrony as an optimization problem of the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory as a model of the basal ganglia to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.

  18. Iterative free-energy optimization for recurrent neural networks (INFERNO).

    Directory of Open Access Journals (Sweden)

    Alexandre Pitti

    Full Text Available The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spikes' synchrony as an optimization problem of the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory as a model of the basal ganglia to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capabilities of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.

  19. Optical implementation of neural networks

    Science.gov (United States)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computing (ONC) system using inexpensive pocket-size liquid crystal televisions (LCTVs) has been developed by graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computer has only 8×8=64 neurons, it can easily be extended to 16×20=320 neurons. The major advantages of this LCTV architecture, as compared with other reported ONCs, are low cost and operational flexibility. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  20. Identifying Jets Using Artificial Neural Networks

    Science.gov (United States)

    Rosand, Benjamin; Caines, Helen; Checa, Sofia

    2017-09-01

    We investigate particle jet interactions with the Quark Gluon Plasma (QGP) using artificial neural networks modeled on those used in computer image recognition. We create jet images by binning jet particles into pixels and preprocessing every image. We analyzed the jets with a Multi-layered maxout network and a convolutional network. We demonstrate each network's effectiveness in differentiating simulated quenched jets from unquenched jets, and we investigate the method that the network uses to discriminate among different quenched jet simulations. Finally, we develop a greater understanding of the physics behind quenched jets by investigating what the network learnt as well as its effectiveness in differentiating samples. Yale College Freshman Summer Research Fellowship in the Sciences and Engineering.

  1. Artificial neural networks as quantum associative memory

    Science.gov (United States)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states, and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, and also consider neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple-clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n ... This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy.
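
    As a minimal illustration of the Hopfield-network recall discussed above, the sketch below stores a few binary patterns with a Hebbian rule and retrieves one from a corrupted cue. It is the classical software formulation, not the quantum-annealer implementation studied in the record; the pattern sizes and update count are arbitrary choices.

        import numpy as np

        def train_hopfield(patterns):
            """Hebbian storage of +/-1 patterns; returns the weight matrix."""
            n = patterns.shape[1]
            W = np.zeros((n, n))
            for p in patterns:
                W += np.outer(p, p)
            np.fill_diagonal(W, 0.0)              # no self-connections
            return W / patterns.shape[0]

        def recall(W, cue, n_steps=100, rng=None):
            """Asynchronous threshold updates starting from a (possibly noisy) cue."""
            rng = rng or np.random.default_rng(0)
            s = cue.copy()
            for _ in range(n_steps):
                i = rng.integers(len(s))          # pick a random unit
                s[i] = 1 if W[i] @ s >= 0 else -1 # threshold its net input
            return s

        # toy usage: store two 8-bit patterns and recall the first from a noisy cue
        pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
        W = train_hopfield(pats)
        noisy = pats[0].copy()
        noisy[0] *= -1
        print(recall(W, noisy))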

  2. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.

  3. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  4. Matrix representation of a Neural Network

    DEFF Research Database (Denmark)

    Christensen, Bjørn Klint

    Processing, by David Rummelhart (Rummelhart 1986) for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear...... network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes all indexed from 0. Bias-node for the hidden...
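
    A minimal sketch of the matrix formulation this record refers to is given below, assuming a three-layer network with sigmoid hidden and output units and with the bias nodes folded into the weight matrices as an extra constant input of 1. The layer sizes are arbitrary, and the code illustrates the idea rather than reproducing the paper's own implementation of the feedforward and backpropagation algorithms.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def forward(x, W_hidden, W_output):
            """Feedforward pass in matrix form.

            x        : input vector of length I
            W_hidden : J x (I + 1) matrix (last column holds the hidden biases)
            W_output : K x (J + 1) matrix (last column holds the output biases)
            """
            x_ext = np.append(x, 1.0)              # append the bias input
            h = sigmoid(W_hidden @ x_ext)          # hidden activations (length J)
            h_ext = np.append(h, 1.0)
            y = sigmoid(W_output @ h_ext)          # output activations (length K)
            return h, y

        # toy usage with I=3 inputs, J=4 hidden nodes, K=2 outputs
        rng = np.random.default_rng(1)
        W_h = rng.normal(size=(4, 4))   # 4 x (3 + 1)
        W_o = rng.normal(size=(2, 5))   # 2 x (4 + 1)
        _, y = forward(np.array([0.2, -0.5, 0.7]), W_h, W_o)
        print(y)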

  5. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists of minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof of the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimate, thus making the algorithm suitable for real-time implementations.
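
    The reconstruction idea above can be sketched as gradient descent on the mean-square error of a single sine-transfer-function neuron. The parametrisation y ~ a*sin(w*t + phi), the learning rate, and the epoch count below are assumptions for illustration only, not the authors' exact algorithm.

        import numpy as np

        def fit_sine_neuron(t, x, lr=0.01, epochs=5000, w0=1.0):
            """Fit x(t) ~ a*sin(w*t + phi) by gradient descent on the MSE."""
            a, w, phi = 1.0, w0, 0.0
            for _ in range(epochs):
                y = a * np.sin(w * t + phi)
                err = y - x
                # gradients of 0.5*mean(err**2) with respect to a, w, phi
                g_a = np.mean(err * np.sin(w * t + phi))
                g_w = np.mean(err * a * t * np.cos(w * t + phi))
                g_phi = np.mean(err * a * np.cos(w * t + phi))
                a -= lr * g_a
                w -= lr * g_w
                phi -= lr * g_phi
            return a, w, phi

        # toy usage: recover a 2 Hz sinusoid sampled at 100 Hz
        t = np.arange(0, 2, 0.01)
        x = 1.5 * np.sin(2 * np.pi * 2.0 * t + 0.3)
        print(fit_sine_neuron(t, x, w0=2 * np.pi * 1.8))

    In practice the frequency estimate needs a reasonable initial guess, which is exactly the role the authors give to the first (period-estimating) network.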

  6. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  7. Fuzzy logic and neural network technologies

    Science.gov (United States)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  8. A Topological Perspective of Neural Network Structure

    Science.gov (United States)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  9. Neural-network-based voice-tracking algorithm

    Science.gov (United States)

    Baker, Mary; Stevens, Charise; Chaparro, Brennen; Paschall, Dwayne

    2002-11-01

    A voice-tracking algorithm was developed and tested for the purposes of electronically separating the voice signals of simultaneous talkers. Many individuals suffer from hearing disorders that often inhibit their ability to focus on a single speaker in a multiple speaker environment (the cocktail party effect). Digital hearing aid technology makes it possible to implement complex algorithms for speech processing in both the time and frequency domains. In this work, an average magnitude difference function (AMDF) was performed on mixed voice signals in order to determine the fundamental frequencies present in the signals. A time prediction neural network was trained to recognize normal human voice inflection patterns, including rising, falling, rising-falling, and falling-rising patterns. The neural network was designed to track the fundamental frequency of a single talker based on the training procedure. The output of the neural network can be used to design an active filter for speaker segregation. Tests were done using audio mixing of two to three speakers uttering short phrases. The AMDF function accurately identified the fundamental frequencies present in the signal. The neural network was tested using a single speaker uttering a short sentence. The network accurately tracked the fundamental frequency of the speaker.
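
    A minimal sketch of the average magnitude difference function (AMDF) used in this record is shown below; the sampling rate and pitch range are placeholder values, and a real voice tracker would add framing, voicing decisions, and the neural-network tracking stage described above.

        import numpy as np

        def amdf(frame, max_lag):
            """Average magnitude difference function for lags 1..max_lag."""
            n = len(frame)
            return np.array([np.mean(np.abs(frame[:n - k] - frame[k:]))
                             for k in range(1, max_lag + 1)])

        def fundamental_frequency(frame, fs, f_min=80.0, f_max=400.0):
            """Estimate F0 from the deepest AMDF valley inside a plausible pitch range."""
            lag_min = int(fs / f_max)
            lag_max = int(fs / f_min)
            d = amdf(frame, lag_max)
            best_lag = lag_min + np.argmin(d[lag_min - 1:])   # lags are 1-indexed
            return fs / best_lag

        # toy usage: a 150 Hz tone sampled at 8 kHz
        fs = 8000
        t = np.arange(0, 0.04, 1 / fs)
        print(fundamental_frequency(np.sin(2 * np.pi * 150 * t), fs))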

  10. Evaluating Functional Autocorrelation within Spatially Distributed Neural Processing Networks*

    Science.gov (United States)

    Derado, Gordana; Bowman, F. Dubois; Ely, Timothy D.; Kilts, Clinton D.

    2010-01-01

    Data-driven statistical approaches, such as cluster analysis or independent component analysis, applied to in vivo functional neuroimaging data help to identify neural processing networks that exhibit similar task-related or resting-state patterns of activity. Ideally, the measured brain activity for voxels within such networks should exhibit high autocorrelation. An important limitation is that the algorithms do not typically quantify or statistically test the strength or nature of the within-network relatedness between voxels. To extend the results given by such data-driven analyses, we propose the use of Moran's I statistic to measure the degree of functional autocorrelation within identified neural processing networks and to evaluate the statistical significance of the observed associations. We adapt the conventional definition of Moran's I, for applicability to neuroimaging analyses, by defining the global autocorrelation index using network-based neighborhoods. Also, we compute network-specific contributions to the overall autocorrelation. We present results from a bootstrap analysis that provide empirical support for the use of our hypothesis testing framework. We illustrate our methodology using positron emission tomography (PET) data from a study that examines the neural representation of working memory among individuals with schizophrenia and functional magnetic resonance imaging (fMRI) data from a study of depression. PMID:21643436
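
    For reference, the conventional global form of Moran's I that the authors adapt, with the weights defined over network-based neighborhoods, can be written as below. This is the textbook definition rather than a restatement of the paper's exact notation: x_i is the measured activity at voxel i, \bar{x} is the mean over the N voxels, and w_{ij} is nonzero only for voxel pairs in the same network-based neighborhood.

        I = \frac{N}{\sum_i \sum_j w_{ij}} \cdot
            \frac{\sum_i \sum_j w_{ij}\,(x_i - \bar{x})(x_j - \bar{x})}
                 {\sum_i (x_i - \bar{x})^2}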

  11. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks with the best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory to inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline to decide about the values of parameters in a simulation of a spiking neural network.
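
    The parameter regime described above (2% connection probability, 80% excitatory and 20% inhibitory neurons) can be sketched as a random connectivity matrix; the network size and synaptic weight values below are placeholder choices for illustration only, and no spiking dynamics are simulated here.

        import numpy as np

        def random_ei_network(n=1000, p=0.02, frac_exc=0.8,
                              w_exc=0.1, w_inh=-0.5, seed=0):
            """Random directed connectivity with excitatory/inhibitory populations."""
            rng = np.random.default_rng(seed)
            n_exc = int(frac_exc * n)
            signs = np.r_[np.full(n_exc, w_exc), np.full(n - n_exc, w_inh)]
            mask = rng.random((n, n)) < p      # each ordered pair connected with prob. p
            np.fill_diagonal(mask, False)      # no self-connections
            W = mask * signs[:, None]          # row i holds the outgoing weights of neuron i
            return W

        W = random_ei_network()
        print(W.shape, (W > 0).mean(), (W < 0).mean())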

  12. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of the basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks are discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  13. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    Science.gov (United States)

    Yeh, Wei-Chang

    Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses the Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in ANN to evaluate SNRFs. According to the experimental results of the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach with at least 16.6% improvement in the median absolute deviation in the cost of extra 2 s on average for all experiments.

  14. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.

    Science.gov (United States)

    Deng, Lei; Jiao, Peng; Pei, Jing; Wu, Zhenzhi; Li, Guoqi

    2018-02-02

    Although deep neural networks (DNNs) have become a revolutionary force opening up the AI era, their notoriously huge hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, have made the on-chip training of DNNs quite promising. Therefore there is a pressing need to build an architecture that could subsume these networks under a unified framework that achieves both higher performance and less overhead. To this end, two fundamental issues are yet to be addressed. The first one is how to implement the back propagation when neuronal activations are discrete. The second one is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory/computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable implementing the back propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes the binary or ternary networks as its special cases, and under which a heuristic algorithm is provided at the website https://github.com/AcrossV/Gated-XNOR. More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed gated XNOR networks (GXNOR-Nets), since only the event of non-zero weight and non-zero activation enables the control gate to start the XNOR logic operations in the original binary networks. This promises event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity
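
    As a loose illustration of the ternary weights and activations discussed above, the sketch below applies a simple symmetric threshold to map values to {-1, 0, +1} and accumulates only where both operands are non-zero. This is generic ternarization with an arbitrary threshold, not the paper's discrete state transition (DST) method or the GXNOR training procedure.

        import numpy as np

        def ternarize(x, threshold=0.05):
            """Map values to {-1, 0, +1} using a symmetric threshold."""
            return np.sign(x) * (np.abs(x) > threshold)

        def gated_dot(w_t, a_t):
            """Accumulate only where both the ternary weight and activation are non-zero;
            there the product reduces to an XNOR-like sign agreement."""
            gate = (w_t != 0) & (a_t != 0)
            return np.sum(w_t[gate] * a_t[gate])

        w = ternarize(np.array([0.3, -0.01, -0.4, 0.02, 0.7]))
        a = ternarize(np.array([-0.2, 0.5, -0.6, 0.0, 0.1]))
        print(w, a, gated_dot(w, a))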

  15. Neural Networks for Beat Perception in Musical Rhythm.

    Science.gov (United States)

    Large, Edward W; Herrera, Jorge A; Velasco, Marc J

    2015-01-01

    Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly 40 years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. In a pulse synchronization study, we test the model's key prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result shows that participants perceive the pulse at the theoretically predicted frequency. This model is one of the few consistent with neurophysiological evidence on the role of neural oscillation, and it explains a phenomenon that other computational models fail to explain. Because it is based on a canonical model, the predictions hold for an entire family of dynamical systems, not only a specific one. Thus, this model provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm.

  16. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    [Abstract not recovered; only fragments of the report text were extracted. They describe training the network on a character-level corpus of writings by Nietzsche and cite sentiment-analysis work from the 49th Annual Meeting of the Association for Computational Linguistics (Portland, Oregon, USA, June 2011), http://www.aclweb.org/anthology/P11-1015.]

  17. Evaluating the potential of artificial neural network and neuro-fuzzy techniques for estimating antioxidant activity and anthocyanin content of sweet cherry during ripening by using image processing.

    Science.gov (United States)

    Taghadomi-Saberi, Saeedeh; Omid, Mahmoud; Emam-Djomeh, Zahra; Ahmadi, Hojjat

    2014-01-15

    This paper presents a versatile way for estimating antioxidant activity and anthocyanin content at different ripening stages of sweet cherry by combining image processing and two artificial intelligence (AI) techniques. In comparison with common time-consuming laboratory methods for determining these important attributes, this new way is economical and much faster. The accuracy of artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models was studied to estimate the outputs. Sensitivity analysis and principal component analysis were used with ANN and ANFIS respectively to specify the most effective attributes on outputs. Among the designed ANNs, two hidden layer networks with 11-14-9-1 and 11-6-20-1 architectures had the highest correlation coefficients and lowest error values for modeling antioxidant activity (R = 0.93) and anthocyanin content (R = 0.98) respectively. ANFIS models with triangular and two-term Gaussian membership functions gave the best results for antioxidant activity (R = 0.87) and anthocyanin content (R = 0.90) respectively. Comparison of the models showed that ANN outperformed ANFIS for this case. By considering the advantages of the applied system and the accuracy obtained in somewhat similar studies, it can be concluded that both techniques presented here have good potential to be used as estimators of proposed attributes. © 2013 Society of Chemical Industry.

  18. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network in this paper is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexities, the number of state variables of which is equal to that of the dimension of the optimization problems. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.

  19. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  20. The relevance of network micro-structure for neural dynamics

    Directory of Open Access Journals (Sweden)

    Volker ePernice

    2013-06-01

    Full Text Available The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  1. Investment Valuation Analysis with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hüseyin İNCE

    2017-07-01

    Full Text Available This paper shows that discounted cash flow and net present value, which are traditional investment valuation models, can be combined with artificial neural network model forecasting. The main inputs for the valuation models, such as revenue, costs, capital expenditure, and their growth rates, are heavily related to sector dynamics and macroeconomics. The growth rates of those inputs are related to inflation and exchange rates. Therefore, predicting inflation and exchange rates is a critical issue for the valuation output. In this paper, the Turkish economy's inflation rate and the exchange rate of USD/TRY are forecast by artificial neural networks and fed into the discounted cash flow model. Finally, the results are benchmarked against conventional practices.
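
    A minimal sketch of how neural-network forecasts of growth rates could feed a discounted cash flow / net present value calculation is given below. The cash-flow figure, growth rates, and discount rate are invented placeholder numbers, and the forecasting step itself is represented only by a pre-computed list of growth rates rather than an actual network.

        def discounted_cash_flow(base_cash_flow, growth_rates, discount_rate):
            """Net present value of cash flows grown by (possibly forecast) rates."""
            npv, cf = 0.0, base_cash_flow
            for year, g in enumerate(growth_rates, start=1):
                cf *= (1.0 + g)                           # grow the cash flow
                npv += cf / (1.0 + discount_rate) ** year # discount it back to today
            return npv

        # placeholder usage: growth rates as they might come from an ANN forecaster
        forecast_growth = [0.12, 0.10, 0.09, 0.08, 0.07]
        print(discounted_cash_flow(base_cash_flow=100.0,
                                   growth_rates=forecast_growth,
                                   discount_rate=0.15))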

  2. Evaluating neural networks and artificial intelligence systems

    Science.gov (United States)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  3. Artificial Neural Network for Displacement Vectors Determination

    Directory of Open Access Journals (Sweden)

    P. Bohmann

    1997-09-01

    Full Text Available An artificial neural network (NN) for displacement vector (DV) determination is presented in this paper. DVs are computed in areas which are essential for image analysis and computer vision, i.e. areas containing edges, lines, corners, etc. These special features are found by edge operators followed by filtering. The filtering is performed by a threshold function. The next step is DV computation by a 2D Hamming artificial neural network. The method of DV computation is based on full-search block-matching algorithms. Thanks to the pre-processing (edge finding), the correlation function is very simple, the process of DV determination needs less computation, and the structure of the NN is simpler.

  4. Neural Network Program Package for Prosody Modeling

    Directory of Open Access Journals (Sweden)

    J. Santarius

    2004-04-01

    Full Text Available This contribution describes the programme for one part of the automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented the considerable improvement of the naturalness of synthetic speech, but this approach requires completing the input feature values by hand. This completion takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for the text and speech signal pre-processing, neural network training, sensitivity analysis, result processing, and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].

  5. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  6. Hierarchical Neural Network Structures for Phoneme Recognition

    CERN Document Server

    Vasquez, Daniel; Minker, Wolfgang

    2013-01-01

    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task where a Hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayer Perceptron. Additionally, the output of the first level serves as the input to the second level. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first-level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information. The computational time could be reduced by 57% whilst keeping the system accuracy comparable to the baseline hierarchical approach.

  7. Effective learning in recurrent max-min neural networks.

    Science.gov (United States)

    Loe, Kia Fock; Teow, Loo Nin

    1998-04-01

    Max and min operations have interesting properties that facilitate the exchange of information between the symbolic and real-valued domains. As such, neural networks that employ max-min activation functions have been a subject of interest in recent years. Since max-min functions are not strictly differentiable, we propose a mathematically sound learning method based on using Fourier convergence analysis of side-derivatives to derive a gradient descent technique for max-min error functions. We then propose a novel recurrent max-min neural network model that is trained to perform grammatical inference as an application example. Comparisons made between this model and recurrent sigmoidal neural networks show that our model not only performs better in terms of learning speed and generalization, but that its final weight configuration allows a deterministic finite automaton (DFA) to be extracted in a straightforward manner. In essence, we are able to demonstrate that our proposed gradient descent technique does allow max-min neural networks to learn effectively.
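
    A minimal sketch of a max-min unit of the kind discussed above is given below: the unit's output is the maximum over inputs of the element-wise minimum of weight and input, i.e. the standard fuzzy max-min composition. This generic formulation, not the authors' recurrent architecture or their side-derivative learning rule, is what the code shows.

        import numpy as np

        def max_min_unit(weights, inputs):
            """Output of a single max-min neuron: max_j min(w_j, x_j)."""
            return np.max(np.minimum(weights, inputs))

        def max_min_layer(W, x):
            """Layer of max-min units; row i of W holds the weights of unit i."""
            return np.max(np.minimum(W, x), axis=1)

        x = np.array([0.2, 0.9, 0.4])
        W = np.array([[0.8, 0.3, 0.5],
                      [0.1, 1.0, 0.2]])
        print(max_min_layer(W, x))   # unit outputs stay within [0, 1]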

  8. HAWC Energy Reconstruction via Neural Network

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary- γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  9. Neural Network Solves "Traveling-Salesman" Problem

    Science.gov (United States)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.

  10. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  11. Convolutional Neural Networks for Font Classification

    OpenAIRE

    Tensmeyer, Chris; Saunders, Daniel; Martinez, Tony

    2017-01-01

    Classifying pages or text lines into font categories aids transcription because single font Optical Character Recognition (OCR) is generally more accurate than omni-font OCR. We present a simple framework based on Convolutional Neural Networks (CNNs), where a CNN is trained to classify small patches of text into predefined font classes. To classify page or line images, we average the CNN predictions over densely extracted patches. We show that this method achieves state-of-the-art performance...

  12. Deep Learning in Neural Networks: An Overview

    OpenAIRE

    Schmidhuber, Juergen

    2014-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpr...

  13. A Dynamic Neural Network Approach to CBM

    Science.gov (United States)

    2011-03-15

    [Abstract not recovered; only fragments of the report were extracted. They describe post-processing that extracts the time difference between corresponding events in order to calculate crankshaft rotational speed, data potentially already available from existing sensors (such as a crankshaft timing device), a neural network processor to carry out the calculation, and files designated with the "_genmod" suffix that served as the sources for the training and testing sets.]

  14. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  15. Identifying Tracks Duplicates via Neural Network

    CERN Document Server

    Sunjerga, Antonio; CERN. Geneva. EP Department

    2017-01-01

    The goal of the project is to study the feasibility of state-of-the-art machine learning techniques in track reconstruction. Machine learning techniques provide promising ways to speed up the pattern recognition of tracks by adding more intelligence to the algorithms. The implementation of a neural network for identifying duplicate tracks is discussed. Different approaches are shown and the results are compared to the method that is currently in use.

  16. Multilingual Text Detection with Nonlinear Neural Network

    Directory of Open Access Journals (Sweden)

    Lin Li

    2015-01-01

    Full Text Available Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network based on the traditional Convolutional Neural Network that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.

  17. Forecasting Energy Commodity Prices Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Massimo Panella

    2012-01-01

    Full Text Available A new machine learning approach for price modeling is proposed. The use of neural networks as an advanced signal processing tool may be successfully used to model and forecast energy commodity prices, such as crude oil, coal, natural gas, and electricity prices. Energy commodities have shown explosive growth in the last decade. They have become a new asset class used also for investment purposes. This creates a huge demand for better modeling as what occurred in the stock markets in the 1970s. Their price behavior presents unique features causing complex dynamics whose prediction is regarded as a challenging task. The use of a Mixture of Gaussian neural network may provide significant improvements with respect to other well-known models. We propose a computationally efficient learning of this neural network using the maximum likelihood estimation approach to calibrate the parameters. The optimal model is identified using a hierarchical constructive procedure that progressively increases the model complexity. Extensive computer simulations validate the proposed approach and provide an accurate description of commodities prices dynamics.

  18. Flood estimation: a neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Swain, P.C.; Seshachalam, C.; Umamahesh, N.V. [Regional Engineering Coll., Warangal (India). Water and Environment Div.

    2000-07-01

    The artificial neural network (ANN) approach described in this study aims at predicting the flood flow into a reservoir. This differs from the traditional methods of flow prediction in the sense that it belongs to a class of data-driven approaches, whereas the traditional methods are model driven. Physical processes influencing the occurrence of streamflow in a river are highly complex, and are very difficult to model with available statistical or deterministic models. ANNs provide model-free solutions and hence can be expected to be appropriate in these conditions. Non-linearity, adaptivity, evidential response and fault tolerance are additional properties and capabilities of neural networks. This paper highlights the applicability of neural networks for predicting daily flood flow, taking the Hirakud reservoir on the river Mahanadi in Orissa, India, as the case study. The correlation between the observed and predicted flows and the relative error are considered to measure the performance of the model. The correlation between the observed and the modelled flows is computed to be 0.9467 in the testing phase of the model. (orig.)

  19. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford University Press, 1995.

  20. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of input parameters shows that the maximum absolute error for the model is about 20% and 88% of the output results have absolute errors less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  1. Artificial neural network applications in ionospheric studies

    Directory of Open Access Journals (Sweden)

    L. R. Cander

    1998-06-01

    Full Text Available The ionosphere of Earth exhibits considerable spatial changes and has large temporal variability of various timescales related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show by the most recent examples that modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully applying the artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of ionospheric critical frequency, f0F2 and Total Electron Content (TEC. Comparisons between results obtained by the proposed approaches and measured f0F2 and TEC data provide prospects for future applications of the artificial neural networks in ionospheric studies.

  2. Improved Extension Neural Network and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2014-01-01

    Full Text Available Extension neural network (ENN) is a new neural network that combines extension theory and artificial neural network (ANN). The learning algorithm of ENN is based on supervised learning. One of the important issues in the field of classification and recognition with ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and at the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train ENN. Compared with traditional ENN, the proposed improved ENN (IENN) has better performance. Moreover, IENN is independent of the supervised learning algorithm and the initial labeled data. Experimental results verify the effectiveness and applicability of the proposed work.
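
    As a rough illustration of the core/boundary selection idea described above (not the shadowed-set formulation itself), the sketch below clusters synthetic data with k-means and keeps samples that are either clearly inside one cluster or near a border between clusters; the quantile thresholds are arbitrary assumptions.

      # Simplified illustration of selecting "core" and "boundary" training samples,
      # using distance margins as a stand-in for the shadowed-set construction.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

      d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
      d_sorted = np.sort(d, axis=1)
      margin = d_sorted[:, 1] - d_sorted[:, 0]      # small margin -> between clusters

      core = margin > np.quantile(margin, 0.7)      # clearly inside one cluster
      boundary = margin < np.quantile(margin, 0.2)  # near a border between clusters
      train_idx = np.where(core | boundary)[0]      # refined training set for the classifier
      print(len(train_idx), "of", len(X), "samples selected")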

  3. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    Energy Technology Data Exchange (ETDEWEB)

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, 47 from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick-stop training procedure; therefore, the samples were split into training, calibration and prediction subsets. Special techniques, using genetic algorithms, were developed to split the samples representatively into the three subsets. Two separate approaches were tried. In one approach, the screened and unscreened coal were modeled separately. In the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e. though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (individual predictions).

  4. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While there are many research problems that need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposed a neural network approach for modeling a small unmanned vehicle's trajectory without knowing its control system and bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, with prediction errors of less than 2.0 meters in both temporal and spatial dimensions.
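
    A minimal sketch of the general idea, learning a next-state mapping from state, wind, and command inputs, is given below; the toy dynamics, feature layout, and network size are assumptions and are unrelated to the simulator used in the paper.

      # Generic next-state regressor for trajectory modeling (a sketch, not the paper's model).
      # Inputs: current position/velocity, wind, commanded waypoint; output: next position.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n = 2000
      state = rng.normal(size=(n, 6))     # x, y, z, vx, vy, vz (synthetic)
      wind = rng.normal(size=(n, 3))
      command = rng.normal(size=(n, 3))   # commanded next waypoint
      X = np.hstack([state, wind, command])
      # Toy dynamics: next position = current position + dt * velocity + wind drift
      dt = 1.0
      y = state[:, :3] + dt * state[:, 3:6] + 0.1 * wind

      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      model.fit(X[:1500], y[:1500])
      print("test MSE:", np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))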

  5. Categorization in neural networks and prosopagnosia

    Science.gov (United States)

    Virasoro, M. A.

    1989-12-01

    Prosopagnosia is a syndrome characterized by a generalized difficulty in visually recognizing individual patterns among those that are similar and can therefore be said to belong to the same category. I suggest that the existence of this dysfunction may be an important clue for understanding the categorization process in the brain. In this direction the performance of neural networks under random destruction of synapses is analysed. It is found that in almost every network that stores correlated patterns the coding of the discriminating details between individuals inside a class is more sensitive to noise or to random destruction than the coding that distinguishes between classes. It follows that a process of death and/or deterioration at an intermediate level of intensity, even if it acts randomly on the network, may lead to a malfunctioning of the network that resembles prosopagnosia.

  6. Explicit neural representations, recursive neural networks and conscious visual perception

    National Research Council Canada - National Science Library

    Pollen, Daniel A

    2003-01-01

    ... network remains unresolved. We inquire as to whether recursive processing-by which we mean the combined flow and integrated outcome of afferent and recurrent activity across a series of cortical areas-is essential...

  7. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Purpose: To develop an effective analytical method to distinguish old peels of Xinhui Pericarpium citri reticulatae (XPCR) stored for > 3 years from new peels stored for < 3 years. Methods: Artificial neural networks (ANN) models, including general regression neural network (GRNN) and multi-layer feedforward neural ...

  8. A novel neural network based image reconstruction model with scale and rotation invariance for target identification and classification for Active millimetre wave imaging

    Science.gov (United States)

    Agarwal, Smriti; Bisht, Amit Singh; Singh, Dharmendra; Pathak, Nagendra Prasad

    2014-12-01

    Millimetre wave (MMW) imaging is gaining tremendous interest among researchers, with potential applications in security checks, standoff personal screening, automotive collision avoidance, and more. Current state-of-the-art imaging techniques, viz. microwave and X-ray imaging, suffer from lower resolution and harmful ionizing radiation, respectively. In contrast, MMW imaging operates at lower power and is non-ionizing, hence medically safe. Despite these favourable attributes, MMW imaging encounters various challenges: it is still a relatively unexplored area and lacks a suitable imaging methodology for extracting complete target information. In view of these challenges, an MMW active imaging radar system at 60 GHz was designed for standoff imaging applications. A C-scan (horizontal and vertical scanning) methodology was developed that provides a cross-range resolution of 8.59 mm. The paper further details a suitable target identification and classification methodology. For identification of regular-shape targets, a mean-standard deviation based segmentation technique was formulated and further validated using a different target shape. For classification, a probability density function based target material discrimination methodology was proposed and further validated on a different dataset. Lastly, a novel artificial neural network based scale- and rotation-invariant image reconstruction methodology has been proposed to counter the distortions in the image caused by noise, rotation or scale variations. The designed neural network, once trained with sample images, automatically takes care of these deformations and successfully reconstructs the corrected image for the test targets. The techniques developed in this paper are tested and validated using four different regular shapes, viz. rectangle, square, triangle and circle.

  9. Human Face Identification using KL Transform and Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Joo [LG Electronics Inc. Multimedia Research Lab. (Korea, Republic of); Ji, Seung Hwan [Mi Re Industry Inc. (Korea, Republic of); Yoo, Jae Hyung; Kim, Jung Hwan; Park, Min Yong [Yonsei University (Korea, Republic of)

    1999-01-01

    Machine recognition of faces from still and video images is emerging as an active research area spanning several disciplines such as image processing, pattern recognition, computer vision and neural networks. In addition, human face identification has numerous applications such as human-interface-based systems and real-time video systems for surveillance and security. In this paper, we propose an algorithm that can identify a particular individual face. We consider a human face identification system in color space, which has not often been considered in conventional methods. In order to make the algorithm insensitive to luminance, we convert the conventional RGB coordinates into normalized CIE coordinates. The normalized-CIE-based facial images are KL-transformed. The transformed data are used as the input of a multi-layered neural network and the network is trained using error-backpropagation methods. Finally, we verify the system performance of the proposed algorithm by experiments. (author). 12 refs., 7 figs., 3 tabs.

  10. Some structural determinants of Pavlovian conditioning in artificial neural networks.

    Science.gov (United States)

    Sánchez, José M; Galeazzi, Juan M; Burgos, José E

    2010-05-01

    This paper investigates the possible role of neuroanatomical features in Pavlovian conditioning, via computer simulations with layered, feedforward artificial neural networks. The networks' structure and functioning are described by a strongly bottom-up model that takes into account the roles of hippocampal and dopaminergic systems in conditioning. Neuroanatomical features were simulated as generic structural or architectural features of neural networks. We focused on the number of units per hidden layer and connectivity. The effect of the number of units per hidden layer was investigated through simulations of resistance to extinction in fully connected networks. Large networks were more resistant to extinction than small networks, a stochastic effect of the asynchronous random procedure used in the simulator to update activations and weights. These networks did not simulate second-order conditioning because weight competition prevented conditioning to a stimulus after conditioning to another. Partially connected networks simulated second-order conditioning and devaluation of the second-order stimulus after extinction of a similar first-order stimulus. Similar stimuli were simulated as nonorthogonal input-vectors. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  11. 1991 IEEE International Joint Conference on Neural Networks, Singapore, Nov. 18-21, 1991, Proceedings. Vols. 1-3

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The present conference discusses the application of neural networks to associative memories, neurorecognition, hybrid systems, supervised and unsupervised learning, image processing, neurophysiology, sensation and perception, electrical neurocomputers, optimization, robotics, machine vision, sensorimotor control systems, and neurodynamics. Attention is given to such topics as optimal associative mappings in recurrent networks, self-improving associative neural network models, fuzzy activation functions, adaptive pattern recognition with sparse associative networks, efficient question-answering in a hybrid system, the use of abstractions by neural networks, remote-sensing pattern classification, speech recognition with guided propagation, inverse-step competitive learning, and rotational quadratic function neural networks. Also discussed are electrical load forecasting, evolutionarily stable and unstable strategies, the capacity of recurrent networks, neural nets vs. control theory, perceptrons for image recognition, storage capacity of bidirectional associative memories, associative random optimization for control, automatic synthesis of digital neural architectures, self-learning robot vision, and the associative dynamics of chaotic neural networks.

  12. Slow diffusive dynamics in a chaotic balanced neural network.

    Directory of Open Access Journals (Sweden)

    Nimrod Shaham

    2017-05-01

    Full Text Available It has been proposed that neural noise in the cortex arises from chaotic dynamics in the balanced state: in this model of cortical dynamics, the excitatory and inhibitory inputs to each neuron approximately cancel, and activity is driven by fluctuations of the synaptic inputs around their mean. It remains unclear whether neural networks in the balanced state can perform tasks that are highly sensitive to noise, such as storage of continuous parameters in working memory, while also accounting for the irregular behavior of single neurons. Here we show that continuous parameter working memory can be maintained in the balanced state, in a neural circuit with a simple network architecture. We show analytically that in the limit of an infinite network, the dynamics generated by this architecture are characterized by a continuous set of steady balanced states, allowing for the indefinite storage of a continuous parameter. In finite networks, we show that the chaotic noise drives diffusive motion along the approximate attractor, which gradually degrades the stored memory. We analyze the dynamics and show that the slow diffusive motion induces slowly decaying temporal cross correlations in the activity, which differ substantially from those previously described in the balanced state. We calculate the diffusivity, and show that it is inversely proportional to the system size. For large enough (but realistic) neural population sizes, and with suitable tuning of the network connections, the proposed balanced network can sustain continuous parameter values in memory over time scales larger by several orders of magnitude than the single neuron time scale.

  13. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Large numbers of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in the low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insights into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown as they are usually kept by manufacturers as a part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once being fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  14. Signaling in large-scale neural networks

    DEFF Research Database (Denmark)

    Berg, Rune W; Hounsgaard, Jørn

    2009-01-01

    We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.

  15. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are perhaps the most important factor influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights and to compare their performance with other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we apply them to classify four opposite binary XOR clusters and to classify continuous real data sets such as Iris and Ecoli.
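
    To illustrate the general idea of evolutionary weight optimization (not any specific algorithm compared in the paper), the sketch below evolves the 13 weights of a tiny 2-3-1 network on the XOR task using selection and Gaussian mutation; the population size and mutation scale are arbitrary choices.

      # Minimal genetic-algorithm training of a tiny neural network on XOR
      # (an illustration of the general idea, not the algorithms compared in the paper).
      import numpy as np

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([0, 1, 1, 0], dtype=float)

      def forward(w, X):
          W1 = w[:6].reshape(2, 3); b1 = w[6:9]        # 2-3 hidden layer
          W2 = w[9:12]; b2 = w[12]                     # 3-1 output layer
          h = np.tanh(X @ W1 + b1)
          return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

      def fitness(w):
          return -np.mean((forward(w, X) - y) ** 2)    # higher is better

      rng = np.random.default_rng(0)
      pop = rng.normal(size=(50, 13))
      for gen in range(300):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[-10:]]                 # keep the 10 fittest
          children = parents[rng.integers(0, 10, size=40)] \
                     + rng.normal(scale=0.2, size=(40, 13))       # mutate copies of parents
          pop = np.vstack([parents, children])
      best = pop[np.argmax([fitness(w) for w in pop])]
      print(np.round(forward(best, X)))                           # ideally [0. 1. 1. 0.]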

  16. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high-speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  17. Network traffic anomaly prediction using Artificial Neural Network

    Science.gov (United States)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase in internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identities, causing computer damage, or denying service to other users [1]. Malware that attacks computers or servers often triggers network traffic anomaly phenomena. Based on Sophos's report [2], Indonesia is the country at highest risk of malware attack and it also has high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most frequent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014 [4]. The data series is preprocessed first, then the network traffic anomaly is predicted using an Artificial Neural Network with two weight-update algorithms: Gradient Descent and Momentum. The prediction error is calculated using the Mean Squared Error (MSE) [7]. The experimental results show that the MSE for SQL injection is 0.03856. Thus, this approach can be used to predict network traffic anomalies.
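
    The two weight-update rules can be illustrated on a tiny autoregressive predictor. The sketch below uses a synthetic series rather than the Id-SIRTII/CC data, and the learning rate, lag, and momentum coefficient are assumed values.

      # Sketch of the two weight-update rules (plain gradient descent vs. momentum)
      # on a tiny autoregressive predictor; data are synthetic, not the real traffic series.
      import numpy as np

      rng = np.random.default_rng(0)
      series = np.sin(np.arange(200) / 8.0) + 0.1 * rng.normal(size=200)
      lag = 5
      X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
      y = series[lag:]

      def train(momentum=0.0, lr=0.01, epochs=500):
          w = np.zeros(lag); b = 0.0
          vw = np.zeros(lag); vb = 0.0                 # velocity terms (zero for plain GD)
          for _ in range(epochs):
              err = X @ w + b - y
              gw, gb = X.T @ err / len(y), err.mean()  # gradients of the MSE
              vw = momentum * vw - lr * gw
              vb = momentum * vb - lr * gb
              w += vw; b += vb
          return np.mean((X @ w + b - y) ** 2)

      print("MSE (gradient descent):", train(momentum=0.0))
      print("MSE (momentum):        ", train(momentum=0.9))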

  18. Constructing general partial differential equations using polynomial and neural networks.

    Science.gov (United States)

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Neural network controller for underwater work ROV. Suichu sagyoyo ROV no neural network controller

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Y.; Kidoshi, H.; Arahata, M.; Shoji, K.; Takahashi, Y. (Ishikawajima-Harima Heavy Industries, Co. Ltd., Tokyo (Japan))

    1993-07-01

    The previous underwater work ROV (remotely operated vehicle) has been controlled manually because its dynamic properties change underwater. Ishikawajima-Harima Heavy Industries (IHI) has applied a neural network to an adaptive controller for the ROV. This paper describes the objectives of the research, the design of the control logic, and tank experiments on a model ROV. Manual operation was used to provide the initial learning data for the neural network in order to initialize the control parameters for optimization. The model ROV was designed to achieve and maintain constant depth in normal operation. As a consequence of the tank experiments, it was demonstrated that the controller can acquire the skill of operators, can further improve the acquired skill, and can construct an automatic control system autonomously even if the dynamic properties are not known. 6 refs., 8 figs.

  1. Neural networks for beat perception in musical rhythm

    Directory of Open Access Journals (Sweden)

    Edward W Large

    2015-11-01

    Full Text Available Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly forty years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. We test the model's prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm.

  2. Optical neural network system for pose determination of spinning satellites

    Science.gov (United States)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.

  3. Neural Network Model of memory retrieval

    Directory of Open Access Journals (Sweden)

    Stefano eRecanatesi

    2015-12-01

    Full Text Available Human memory can store large amounts of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predict the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
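
    A toy sketch of the retrieval mechanism (far simpler than the published model) is shown below: a Hopfield network with Hebbian weights is driven by noise plus an oscillating global inhibition term, and the overlap with each stored pattern is tracked over time; all parameter values are assumptions.

      # Minimal Hopfield-style retrieval with oscillating global inhibition and noise
      # (a toy sketch of the mechanism described, not the published model).
      import numpy as np

      rng = np.random.default_rng(1)
      N, P = 200, 5
      patterns = rng.choice([-1, 1], size=(P, N))
      W = (patterns.T @ patterns) / N               # Hebbian weights
      np.fill_diagonal(W, 0.0)

      s = patterns[0].copy().astype(float)          # start in the first memory
      overlaps = []
      for t in range(400):
          inhibition = 0.6 * np.sin(2 * np.pi * t / 100.0)   # oscillating feedback inhibition
          noise = 0.3 * rng.normal(size=N)
          s = np.sign(W @ s - inhibition + noise)
          overlaps.append(patterns @ s / N)          # overlap with each stored memory
      print(np.round(np.array(overlaps)[::100], 2))  # transitions between memories may appear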

  4. Tumor diagnosis using the backpropagation neural network method

    Science.gov (United States)

    Ma, Lixing; Sukuta, Sydney; Bruch, Reinhard F.; Afanasyeva, Natalia I.; Looney, Carl G.

    1998-04-01

    For characterization of skin cancer, an artificial neural network method has been developed to diagnose normal tissue, benign tumor and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input data set is the Fourier transform IR spectrum obtained by a new fiberoptic evanescent wave Fourier transform IR spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbency values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbency spectra using chemical factor analysis. These abstract features or factors are also used in the classification.

  5. Neural Network Control of Asymmetrical Multilevel Converters

    Directory of Open Access Journals (Sweden)

    Patrice WIRA

    2009-12-01

    Full Text Available This paper proposes a neural implementation of a harmonic elimination strategy (HES) to control a Uniform Step Asymmetrical Multilevel Inverter (USAMI). The mapping between the modulation rate and the required switching angles is learned and approximated with a Multi-Layer Perceptron (MLP) neural network. After learning, appropriate switching angles can be determined with the neural network, leading to a low-computational-cost neural controller which is well suited for real-time applications. This technique can be applied to multilevel inverters with any number of levels. As an example, a nine-level inverter and an eleven-level inverter are considered and the optimum switching angles are calculated on-line. Comparisons to the well-known sinusoidal pulse-width modulation (SPWM) have been carried out in order to evaluate the performance of the proposed approach. Simulation results demonstrate the technical advantages of the proposed neural implementation over the conventional method (SPWM) in eliminating harmonics while controlling a nine-level and an eleven-level USAMI. This neural approach is applied to the supply of an asynchronous machine, and the results show that it ensures high torque quality by efficiently canceling the harmonics generated by the inverters.

  6. Neural Network Approach To Sensory Fusion

    Science.gov (United States)

    Pearson, John C.; Gelfand, Jack J.; Sullivan, W. E.; Peterson, Richard M.; Spence, Clay D.

    1988-08-01

    We present a neural network model for sensory fusion based on the design of the visual/acoustic target localiza-tion system of the barn owl. This system adaptively fuses its separate visual and acoustic representations of object position into a single joint representation used for head orientation. The building block in this system, as in much of the brain, is the neuronal map. Neuronal maps are large arrays of locally interconnected neurons that represent information in a map-like form, that is, parameter values are systematically encoded by the position of neural activation in the array. The computational load is distributed to a hierarchy of maps, and the computation is performed in stages by transforming the representation from map to map via the geometry of the projections between the maps and the local interactions within the maps. For example, azimuthal position is computed from the frequency and binaural phase information encoded in the signals of the acoustic sensors, while elevation is computed in a separate stream using binaural intensity information. These separate streams are merged in their joint projection onto the external nucleus of the inferior colliculus, a two dimensional array of cells which contains a map of acoustic space. This acoustic map, and the visual map of the retina, jointly project onto the optic tectum, creating a fused visual/acoustic representation of position in space that is used for object localization. In this paper we describe our mathematical model of the stage of visual/acoustic fusion in the optic tectum. The model assumes that the acoustic projection from the external nucleus onto the tectum is roughly topographic and one-to-many, while the visual projection from the retina onto the tectum is topographic and one-to-one. A simple process of self-organization alters the strengths of the acoustic connections, effectively forming a focused beam of strong acoustic connections whose inputs are coincident with the visual inputs

  7. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of streamcourse an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements seriously restricting the application – especially when dealing with real time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN. First studies show the ability of adequately trained multilayer feedforward networks (MLFN to reproduce the model performance.

  8. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered network and self-organizing analysis maps, which have a strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  9. Neuromodulatory connectivity defines the structure of a behavioral neural network.

    Science.gov (United States)

    Diao, Feici; Elliott, Amicia D; Diao, Fengqiu; Shah, Sarav; White, Benjamin H

    2017-11-22

    Neural networks are typically defined by their synaptic connectivity, yet synaptic wiring diagrams often provide limited insight into network function. This is due partly to the importance of non-synaptic communication by neuromodulators, which can dynamically reconfigure circuit activity to alter its output. Here, we systematically map the patterns of neuromodulatory connectivity in a network that governs a developmentally critical behavioral sequence in Drosophila. This sequence, which mediates pupal ecdysis, is governed by the serial release of several key factors, which act both somatically as hormones and within the brain as neuromodulators. By identifying and characterizing the functions of the neuronal targets of these factors, we find that they define hierarchically organized layers of the network controlling the pupal ecdysis sequence: a modular input layer, an intermediate central pattern generating layer, and a motor output layer. Mapping neuromodulatory connections in this system thus defines the functional architecture of the network.

  10. Explaining Deep Convolutional Neural Networks on Music Classification

    OpenAIRE

    Choi, Keunwoo; Fazekas, George; Sandler, Mark

    2016-01-01

    Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. However, the process of learning and prediction is little understood, particularly when it is applied to spectrograms. We introduce auralisation of a CNN to understand its underlying mechanism, which is based on a deconvolution procedure introduced in [2]. Auralisation of a CNN is converting the learned convolutiona...

  11. The Usage of Neural Networks for the Medical Diagnosis

    OpenAIRE

    Malyshevska, Kateryna

    2009-01-01

    The problem of cancer diagnosis from multi-channel images using neural networks is investigated. The goal of this work is to classify the different tissue types which are used to determine the cancer risk. Radial basis function networks and backpropagation neural networks are used for classification. The results of the experiments are presented.

  12. Daily Nigerian peak load forecasting using artificial neural network ...

    African Journals Online (AJOL)

    A daily peak load forecasting technique that uses artificial neural network with seasonal indices is presented in this paper. A neural network of relatively smaller size than the main prediction network is used to predict the daily peak load for a period of one year over which the actual daily load data are available using one ...

  13. Prediction of Parametric Roll Resonance by Multilayer Perceptron Neural Network

    DEFF Research Database (Denmark)

    Míguez González, M; López Peña, F.; Díaz Casás, V.

    2011-01-01

    acknowledged in the last few years. This work proposes a prediction system based on a multilayer perceptron (MP) neural network. The training and testing of the MP network is accomplished by feeding it with simulated data of a three degrees-of-freedom nonlinear model of a fishing vessel. The neural network...

  14. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  15. Particle swarm optimization of a neural network model in a ...

    Indian Academy of Sciences (India)

    This paper presents a particle swarm optimization (PSO) technique to train an artificial neural network (ANN) for prediction of flank wear in drilling, and compares the network performance with that of the back propagation neural network (BPNN). This analysis is carried out following a series of experiments employing high ...

  16. Survey on Neural Networks Used for Medical Image Processing.

    Science.gov (United States)

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-02-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application for medical image processing and an outlook for future research are also discussed. In this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe that this would be very helpful to researchers who are involved in medical image processing with neural network techniques.

  17. Permeability prediction in shale gas reservoirs using Neural Network

    Science.gov (United States)

    Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    Here, we suggest the use of an artificial neural network for permeability prediction in shale gas reservoirs. Prediction of permeability in shale gas reservoirs is a complicated task that requires new models where Darcy's fluid flow model is not suitable. The proposed idea is based on training a neural network machine using a set of well-log data as input and the measured permeability as output. In this case a Multilayer Perceptron neural network is used with the Levenberg-Marquardt algorithm. Application to two horizontal wells drilled in the Barnett shale formation exhibits the power of the neural network model to resolve such a problem. Keywords: Artificial neural network, permeability, prediction, shale gas.
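
    As a hedged sketch of such a well-log-to-permeability regressor: scikit-learn offers no Levenberg-Marquardt solver, so its default optimizer is used as a stand-in, and the log names, values, and permeabilities below are entirely hypothetical.

      # Sketch of a well-log -> permeability regressor. Note: scikit-learn has no
      # Levenberg-Marquardt solver, so the default optimizer is used as a stand-in;
      # log names and values are hypothetical.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Columns: gamma ray (API), bulk density (g/cc), neutron porosity (frac), sonic (us/ft)
      X = np.array([[110, 2.55, 0.08, 72],
                    [140, 2.48, 0.11, 80],
                    [95,  2.62, 0.05, 65],
                    [125, 2.51, 0.09, 76]], dtype=float)
      y = np.log10([1.2e-4, 6.0e-4, 4.0e-5, 2.5e-4])   # permeability in mD, log-scaled

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=8000, random_state=0))
      model.fit(X, y)
      print(10 ** model.predict([[120, 2.53, 0.10, 75]]))  # predicted permeability (mD)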

  18. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices. PMID:27293423

  19. Age and the neural network of personal familiarity.

    Directory of Open Access Journals (Sweden)

    Markus Donix

    Full Text Available BACKGROUND: Accessing information that defines personally familiar context in real-world situations is essential for the social interactions and the independent functioning of an individual. Personal familiarity is associated with the availability of semantic and episodic information as well as the emotional meaningfulness surrounding a stimulus. These features are known to be associated with neural activity in distinct brain regions across different stimulus conditions (e.g., when perceiving faces, voices, places, objects, which may reflect a shared neural basis. Although perceiving context-rich personal familiarity may appear unchanged in aging on the behavioral level, it has not yet been studied whether this can be supported by neuroimaging data. METHODOLOGY/PRINCIPAL FINDINGS: We used functional magnetic resonance imaging to investigate the neural network associated with personal familiarity during the perception of personally familiar faces and places. Twelve young and twelve elderly cognitively healthy subjects participated in the study. Both age groups showed a similar activation pattern underlying personal familiarity, predominantly in anterior cingulate and posterior cingulate cortices, irrespective of the stimulus type. The young subjects, but not the elderly subjects demonstrated an additional anterior cingulate deactivation when perceiving unfamiliar stimuli. CONCLUSIONS/SIGNIFICANCE: Although we found evidence for an age-dependent reduction in frontal cortical deactivation, our data show that there is a stimulus-independent neural network associated with personal familiarity of faces and places, which is less susceptible to aging-related changes.

  20. Feedforward Backpropagation Neural Networks in Prediction of Farmer Risk Preferences

    OpenAIRE

    Kastens, Terry L.; Featherstone, Allen M.

    1996-01-01

    An out-of-sample prediction of Kansas farmers' responses to five surveyed questions involving risk is used to compare ordered multinomial logistic regression models with feedforward backpropagation neural network models. Although the logistic models often predict more accurately than the neural network models in a mean-squared error sense, the neural network models are shown to be more accommodating of loss functions associated with a desire to predict certain combinations of categorical resp...

  1. Classification of behavior using unsupervised temporal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Adair, K.L. [Florida State Univ., Tallahassee, FL (United States). Dept. of Computer Science; Argo, P. [Los Alamos National Lab., NM (United States)

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential pattern sequences as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem.

  2. Pixel-wise Segmentation of Street with Neural Networks

    OpenAIRE

    Bittel, Sebastian; Kaiser, Vitali; Teichmann, Marvin; Thoma, Martin

    2015-01-01

    Pixel-wise street segmentation of photographs taken from a driver's perspective is important for self-driving cars and can also support other object recognition tasks. A framework called SST was developed to examine the accuracy and execution time of different neural networks. The best result, an $F_1$-score of 89.5%, was achieved with a simple feedforward neural network trained to solve a regression task.

  3. Survey on Neural Networks Used for Medical Image Processing

    OpenAIRE

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-01-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application for medical image processing and an outlook for future research are also discussed. By this survey, we try to answer the following two important questions: (1) Wh...

  4. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    Generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, the optimization consumes quite a long time, which may be a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for smoothing parameter calculation. Thresholding is applied to determine which function will be used. One of these functions is defined for datasets having differing ranges of values. It provides balanced smoothing parameters for these datasets through a logarithmic function and by changing the operation range to the lower boundary. On the other hand, the other function calculates the smoothing parameter value for classes having standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and standard and logarithmic learning generalized classifier neural network in the MATLAB environment. One-pass learning generalized classifier neural network provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase the classification performance. Copyright

  5. Neural networks analysis on SSME vibration simulation data

    Science.gov (United States)

    Lo, Ching F.; Wu, Kewei

    1993-01-01

    The neural network method is applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME to supplement the statistical method utilized in the prototype system. The investigation of the neural network analysis is conducted using SSME vibration data from a NASA-developed numerical simulator. The limited application of neural networks to the HPFTP has also shown their effectiveness in diagnosing anomalies in turbopump vibrations.

  6. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in machine learning is classification, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in this paper. After summarizing the theoretical construction of the model, we conduct a simple and practical weather forecasting experiment, which shows that the recognizer's accuracy reaches 100% and that the approach is promising.

  7. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  8. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  9. Training product unit neural networks with genetic algorithms

    Science.gov (United States)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.

  10. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    in unison to solve a specific problem. The network learns through examples, so it requires good examples to train properly, and a trained network model can further be used for prediction purposes. In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer neural networks are often used...

  11. Neural predictive control for active buffet alleviation

    Science.gov (United States)

    Pado, Lawrence E.; Lichtenwalner, Peter F.; Liguore, Salvatore L.; Drouin, Donald

    1998-06-01

    The adaptive neural control of aeroelastic response (ANCAR) and the affordable loads and dynamics independent research and development (IRAD) programs at the Boeing Company jointly examined using neural network based active control technology for alleviating undesirable vibration and aeroelastic response in a scale-model aircraft vertical tail. The potential benefits of adaptive control include reducing aeroelastic response associated with buffet and atmospheric turbulence, increasing flutter margins, and reducing response associated with nonlinear phenomena like limit cycle oscillations. By reducing vibration levels and thus loads, aircraft structures can have lower acquisition cost, reduced maintenance, and extended lifetimes. Wind tunnel tests were undertaken on a rigid 15% scale aircraft in Boeing's mini-speed wind tunnel, which is used for testing at very low air speeds up to 80 mph. The model included a dynamically scaled flexible tail consisting of an aluminum spar with balsa wood cross sections and a hydraulically powered rudder. Neural predictive control was used to actuate the vertical tail rudder in response to strain gauge feedback to alleviate buffeting effects. First-mode RMS strain reduction of 50% was achieved. The neural predictive control system was developed and implemented by the Boeing Company to provide an intelligent, adaptive control architecture for smart structures applications with automated synthesis, self-optimization, real-time adaptation, nonlinear control, and fault tolerance capabilities. It is designed to solve complex control problems through a process of automated synthesis, eliminating costly control design and surpassing it in many instances by accounting for real-world non-linearities.

  12. Parameterizing Stellar Spectra Using Deep Neural Networks

    Science.gov (United States)

    Li, Xiang-Ru; Pan, Ru-Yang; Duan, Fu-Qing

    2017-03-01

    Large-scale sky surveys are observing massive amounts of stellar spectra. The large number of stellar spectra makes it necessary to automatically parameterize spectral data, which in turn helps in statistically exploring properties related to the atmospheric parameters. This work focuses on designing an automatic scheme to estimate effective temperature (T_eff), surface gravity (log g) and metallicity [Fe/H] from stellar spectra. A scheme based on three deep neural networks (DNNs) is proposed. This scheme consists of the following three procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, the three atmospheric parameters T_eff, log g and [Fe/H] are estimated using the computed DNNs. The constructed DNN is a neural network with six layers (one input layer, one output layer and four hidden layers), for which the numbers of nodes in the six layers are 3821, 1000, 500, 100, 30 and 1, respectively. This proposed scheme was tested on both real spectra and theoretical spectra from Kurucz's new opacity distribution function models. Test errors are measured with mean absolute errors (MAEs). The errors on real spectra from the Sloan Digital Sky Survey (SDSS) are 0.1477, 0.0048 and 0.1129 dex for log g, log T_eff and [Fe/H] (64.85 K for T_eff), respectively. For theoretical spectra from Kurucz's new opacity distribution function models, the MAEs are 0.0182, 0.0011 and 0.0112 dex for log g, log T_eff and [Fe/H] (14.90 K for T_eff), respectively.
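
    The three-procedure scheme (autoencoder initialization, gradient-descent fine-tuning, parameter estimation) can be sketched compactly; the fragment below does so for a single hidden layer with synthetic data standing in for the 3821-pixel spectra, so the layer sizes, learning rates and iteration counts are illustrative assumptions rather than the configuration reported above.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for (spectrum, parameter) pairs; the real inputs would be
# 3821-pixel spectra and the target one of T_eff, log g or [Fe/H].
X = rng.normal(size=(500, 50))
y = X @ rng.normal(size=50) + 0.1 * rng.normal(size=500)

# --- Step 1: initialise the hidden layer with an autoencoder ----------------
n_hidden, lr = 20, 0.05
W1 = 0.1 * rng.normal(size=(X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, X.shape[1])); b2 = np.zeros(X.shape[1])
for _ in range(300):                                  # reconstruct X from itself
    H = sigmoid(X @ W1 + b1)
    Xhat = H @ W2 + b2
    d_out = 2 * (Xhat - X) / len(X)                   # d(MSE)/d(Xhat)
    d_hid = (d_out @ W2.T) * H * (1 - H)              # back through the sigmoid
    W2 -= lr * H.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(0)

# --- Step 2: fine-tune the whole network on the supervised target -----------
w_out = 0.1 * rng.normal(size=n_hidden); b_out = 0.0
for _ in range(500):
    H = sigmoid(X @ W1 + b1)
    pred = H @ w_out + b_out
    d_pred = 2 * (pred - y) / len(X)
    d_hid = np.outer(d_pred, w_out) * H * (1 - H)
    w_out -= lr * H.T @ d_pred;  b_out -= lr * d_pred.sum()
    W1 -= lr * X.T @ d_hid;      b1 -= lr * d_hid.sum(0)

# --- Step 3: estimate the parameter for (here, the training) spectra --------
mae = np.mean(np.abs(sigmoid(X @ W1 + b1) @ w_out + b_out - y))
print("training MAE:", mae)
```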

  13. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation can be categorized into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful Deep Recurrent Neural Network, termed the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach compared to a classical Feed Forward Neural Network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods, and it is able to learn the temporal evolution of the precipitation events through over 1000 time lags. The uniqueness of PERSIANN's algorithm enables an alternative precipitation nowcast approach, as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.
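
    A minimal sketch of the sequence-to-frame idea is given below using tf.keras: an LSTM reads a short window of past CTBT frames and regresses the next frame, which would then serve as PERSIANN's raw input. The window length, flattened grid size and layer widths are placeholder assumptions, not the study's configuration, and random arrays stand in for the satellite data.

```python
import numpy as np
import tensorflow as tf

# Illustrative dimensions: 6 past CTBT frames, each flattened to 256 grid
# cells, predicting the next frame (the real fields are much larger).
n_samples, n_lags, n_cells = 1000, 6, 256
X = np.random.rand(n_samples, n_lags, n_cells).astype("float32")
y = np.random.rand(n_samples, n_cells).astype("float32")   # next CTBT frame

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(n_lags, n_cells)),  # memory cell carries temporal structure
    tf.keras.layers.Dense(n_cells),     # regression onto the next brightness-temperature field
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.1)

# The predicted CTBT frame would then be fed to PERSIANN as its raw input
# to produce the 0-6 h precipitation nowcast.
ctbt_forecast = model.predict(X[:1])
```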

  14. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  15. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  16. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance than previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
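
    The reduction of decoding to classification can be sketched as follows: each syndrome bit-string is mapped to a logical correction class by a small feedforward network. The fragment below uses scikit-learn with random placeholder data, since generating labelled syndromes from a stabilizer-circuit noise model is outside its scope; the code distance, layer sizes and class labels are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# For a small surface code each training sample is (syndrome bits -> logical
# correction class). Random placeholders stand in for syndromes that would be
# sampled from a noise model on the code.
rng = np.random.default_rng(0)
syndromes = rng.integers(0, 2, size=(20000, 8))
labels = rng.integers(0, 4, size=20000)        # e.g. I, X, Z, Y logical classes

X_train, X_test, y_train, y_test = train_test_split(syndromes, labels, test_size=0.2)

decoder = MLPClassifier(hidden_layer_sizes=(64, 64), activation="relu", max_iter=200)
decoder.fit(X_train, y_train)

# At run time the feedforward pass is a fixed, fast sequence of matrix
# multiplications, which is what makes this attractive for the tight
# decoding time budget.
print("held-out accuracy:", decoder.score(X_test, y_test))
```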

  17. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  18. Material procedure quality forecast based on genetic BP neural network

    Science.gov (United States)

    Zheng, Bao-Hua

    2017-07-01

    Material procedure quality forecasting plays an important role in quality control. This paper proposes a prediction model based on a genetic algorithm (GA) and a back-propagation (BP) neural network. The GA's global search ability is used to obtain the initial weights and thresholds of the optimized BP neural network. A material process quality prediction model based on the optimized BP neural network is then adopted to predict the error of future processes as a measure of process quality. The results show that the proposed method has the advantages of higher accuracy and a faster convergence rate compared with a plain BP neural network.

  19. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  20. Power converters and AC electrical drives with linear neural networks

    CERN Document Server

    Cirrincione, Maurizio

    2012-01-01

    The first book of its kind, Power Converters and AC Electrical Drives with Linear Neural Networks systematically explores the application of neural networks in the field of power electronics, with particular emphasis on the sensorless control of AC drives. It presents the classical theory based on space-vectors in identification, discusses control of electrical drives and power converters, and examines improvements that can be attained when using linear neural networks. The book integrates power electronics and electrical drives with artificial neural networks (ANN). Organized into four parts,

  1. Liquefaction Microzonation of Babol City Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Choobbasti, A.J.; Barari, Amin

    2012-01-01

    that will be less susceptible to damage during earthquakes. The scope of present study is to prepare the liquefaction microzonation map for the Babol city based on Seed and Idriss (1983) method using artificial neural network. Artificial neural network (ANN) is one of the artificial intelligence (AI) approaches...... is proposed in this paper. To meet this objective, an effort is made to introduce a total of 30 boreholes data in an area of 7 km2 which includes the results of field tests into the neural network model and the prediction of artificial neural network is checked in some test boreholes, finally the liquefaction...

  2. A hardware implementation of neural network with modified HANNIBAL architecture

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bum youb; Chung, Duck Jin [Inha University, Inchon (Korea, Republic of)

    1996-03-01

    A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Back propagation Algorithm Learning). To implement efficient neural network hardware, we analyzed various types of multipliers, which are the major function blocks of the neuro-processor cell. Based on this result, we designed efficient digital neural network hardware using a serial/parallel multiplier and tested its operation. We also analyzed the hardware efficiency with logic-level simulation. (author). 14 refs., 10 figs., 3 tabs.

  3. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  4. Adaptive thresholds for neural networks with synaptic noise.

    Science.gov (United States)

    Bollé, D; Heylen, R

    2007-08-01

    The inclusion of a macroscopic adaptive threshold is studied for the retrieval dynamics of both layered feedforward and fully connected neural network models with synaptic noise. These two types of architectures require a different method to be solved numerically. In both cases it is shown that, if the threshold is chosen appropriately as a function of the cross-talk noise and of the activity of the stored patterns, adapting itself automatically in the course of the recall process, an autonomous functioning of the network is guaranteed. This self-control mechanism considerably improves the quality of retrieval, in particular the storage capacity, the basins of attraction and the mutual information content.

  5. Methylphenidate modulates activity within cognitive neural networks of patients with post-stroke major depression: A placebo-controlled fMRI study

    Directory of Open Access Journals (Sweden)

    Rajamannar Ramasubbu

    2008-10-01

    Full Text Available Rajamannar Ramasubbu (Departments of Psychiatry and Clinical Neurosciences) and Bradley G Goodyear (Department of Radiology and Clinical Neurosciences), University of Calgary, Hotchkiss Brain Institute, Calgary, AB, Canada. Background: Methylphenidate (MP) is a dopamine- and noradrenaline-enhancing agent beneficial for post-stroke depression (PSD) and stroke recovery due to its therapeutic effects on cognition, motivation, and mood; however, the neural mechanisms underlying its clinical effects remain unknown. This study used functional magnetic resonance imaging (fMRI) to investigate the effect of MP on brain activity in response to cognitive tasks in patients with PSD. Methods: Nine stroke outpatients with DSM IV defined major depression underwent fMRI during two cognitive tasks (2-back and serial subtraction) on four occasions, on the first and third day of a three-day treatment of MP and placebo. Nine healthy control (HC) subjects matched for age and sex, scanned during a single session, served as normative data for comparison. The main outcome measure was cognitive task-dependent brain activity. Results: For the 2-back task, left prefrontal, right parietal, posterior cingulate, and temporal and bilateral cerebellar regions exhibited significantly greater activity during the MP condition relative to placebo. Less activity was detected in rostral prefrontal and left parietal regions. For serial subtraction, greater activity was detected in medial prefrontal, biparietal, bitemporal, posterior cingulate, and bilateral cerebellar regions, as well as thalamus, putamen, and insula. Further, underactivation observed during the placebo condition relative to HC improved or reversed during MP treatment. No significant differences in behavioral measures were found between MP and placebo conditions or between patients and HC. Conclusions: Short-term MP treatment may improve and normalize activity in cognitive neuronal networks in patients with PSD.

  6. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    to the biological neurons, works on the input and output passing through a hidden layer. The ANN used here is a data-oriented modeling technique to find relations between input and output patterns by self-learning and without any fixed mathematical form assumed... = (1/p) Σp Ep (2), where Ep = ½ Σk (Tk − Ok)² (3), p is the total number of training patterns, Tk is the actual output and Ok is the predicted output at the kth output node. In the learning process of the backpropagation neural network...

  7. Convolutional neural networks and face recognition task

    Science.gov (United States)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solve this task, but there is still no universal solution that gives adequate results in every case. The current paper presents the following approach. First, we extract an area containing the face; then we apply the Canny edge detector. At the next stage, we use convolutional neural networks (CNNs) to finally solve the face recognition and person identification task.

  8. Convolution neural networks for ship type recognition

    Science.gov (United States)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  9. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), using the C language and OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect if there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, save the event's images and calculate its number of discharges. The neural network was implemented using the backpropagation algorithm, and was trained with 42 training images, containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files, containing 63 lightning discharges previously manually detected. Results showed that all the lightning discharges were detected, many irrelevant events were disregarded, and the event's number of discharges was correctly computed. The neural network used in this project achieved a
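
    A rough sketch of the detection module's logic, not the authors' C/OpenCV implementation, is shown below in Python: consecutive frames are differenced, bright regions are thresholded, and simple shape and brightness features are collected for later classification by the ANN. The file name, threshold values and minimum contour area are assumptions made for illustration.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("storm_video.avi")      # hypothetical input file
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open the video file")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

events = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detection: significant difference between consecutive frames.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)   # brightness criterion
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        if cv2.contourArea(c) < 50:            # drop small, irrelevant events (birds, noise)
            continue
        x, y, w, h = cv2.boundingRect(c)       # shape criterion: elongation of the event
        aspect = h / float(w)
        brightness = gray[y:y + h, x:x + w].mean()
        # Feature vector later fed to the backpropagation ANN, which labels
        # the event as horizontal or vertical lightning.
        events.append([aspect, brightness, w * h])

    prev_gray = gray

print("candidate events:", len(events))
```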

  10. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

    Full Text Available In this paper, we consider a method for defect detection in a video sequence, which consists of three main steps: frame compensation; preprocessing by a detector based on the ranking of pixel values; and the classification of all pixels with anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged under natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.

  11. Comparison of ultrasonic with stirrer performance for removal of sunset yellow (SY) by activated carbon prepared from wood of orange tree: artificial neural network modeling.

    Science.gov (United States)

    Ghaedi, A M; Ghaedi, M; Karami, P

    2015-03-05

    The present work focused on the removal of sunset yellow (SY) dye from aqueous solution by ultrasound-assisted adsorption and stirrer by activated carbon prepared from wood of an orange tree. Also, the artificial neural network (ANN) model was used for predicting removal (%) of SY dye based on experimental data. In this study a green approach was described for the synthesis of activated carbon prepared from wood of an orange tree and usability of it for the removal of sunset yellow. This material was characterized using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The impact of variables, including initial dye concentration (mg/L), pH, adsorbent dosage (g), sonication time (min) and temperature (°C) on SY removal were studied. Fitting the experimental equilibrium data of different isotherm models such as Langmuir, Freundlich, Temkin and Dubinin-Radushkevich models display the suitability and applicability of the Langmuir model. Analysis of experimental adsorption data by different kinetic models including pseudo-first and second order, Elovich and intraparticle diffusion models indicate the applicability of the second-order equation model. The adsorbent (0.5g) is applicable for successful removal of SY (>98%) in short time (10min) under ultrasound condition. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Comparison of ultrasonic with stirrer performance for removal of sunset yellow (SY) by activated carbon prepared from wood of orange tree: Artificial neural network modeling

    Science.gov (United States)

    Ghaedi, A. M.; Ghaedi, M.; Karami, P.

    2015-03-01

    The present work focused on the removal of sunset yellow (SY) dye from aqueous solution by ultrasound-assisted adsorption and stirrer by activated carbon prepared from wood of an orange tree. Also, the artificial neural network (ANN) model was used for predicting removal (%) of SY dye based on experimental data. In this study a green approach was described for the synthesis of activated carbon prepared from wood of an orange tree and usability of it for the removal of sunset yellow. This material was characterized using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The impact of variables, including initial dye concentration (mg/L), pH, adsorbent dosage (g), sonication time (min) and temperature (°C) on SY removal were studied. Fitting the experimental equilibrium data of different isotherm models such as Langmuir, Freundlich, Temkin and Dubinin-Radushkevich models display the suitability and applicability of the Langmuir model. Analysis of experimental adsorption data by different kinetic models including pseudo-first and second order, Elovich and intraparticle diffusion models indicate the applicability of the second-order equation model. The adsorbent (0.5 g) is applicable for successful removal of SY (>98%) in short time (10 min) under ultrasound condition.

  13. Artificial neural network for prediction of antigenic activity for a major conformational epitope in the hepatitis C virus NS3 protein.

    Science.gov (United States)

    Lara, James; Wohlhueter, Robert M; Dimitrova, Zoya; Khudyakov, Yury E

    2008-09-01

    Insufficient knowledge of general principles for accurate quantitative inference of biological properties from sequences is a major obstacle in the rational design of proteins with predetermined activities. Due to this deficiency, protein engineering frequently relies on the use of computational approaches focused on the identification of quantitative structure-activity relationships (SAR) for each specific task. In the current article, a computational model was developed to define the SAR for a major conformational antigenic epitope of the hepatitis C virus (HCV) non-structural protein 3 (NS3) in order to facilitate the rational design of HCV antigens with improved diagnostically relevant properties. We present an artificial neural network (ANN) model that connects changes in the antigenic properties and structure of HCV NS3 recombinant proteins representing all 6 HCV genotypes. The ANN performed quantitative predictions of the enzyme immunoassay (EIA) Signal/Cutoff (S/Co) profiles from sequence information alone with 89.8% accuracy. Amino acid positions and physicochemical factors strongly associated with the HCV NS3 antigenic properties were identified. The positions most significantly contributing to the model were mapped on the NS3 3D structure. The location of these positions validates the major associations found by the ANN model between antigenicity and structure of the HCV NS3 proteins. Matlab code is available at the following URL address: http://bio-ai.myeweb.net/box_widget.html

  14. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whiwlow; Larsen, Jan

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  15. Evolving Spiking Neural Networks for Control of Artificial Creatures

    Directory of Open Access Journals (Sweden)

    Arash Ahmadi

    2013-10-01

    Full Text Available To understand and analyze the behavior of complicated and intelligent organisms, scientists apply bio-inspired concepts including evolution and learning to mathematical models and analyses. Researchers utilize these perceptions in different applications, searching for improved methods and approaches for modern computational systems. This paper presents a genetic algorithm based evolution framework in which Spiking Neural Networks (SNNs) of artificial creatures are evolved for a higher chance of survival in a virtual environment. The artificial creatures are composed of randomly connected Izhikevich spiking reservoir neural networks using population activity rate coding. Inspired by biological neurons, the neuronal connections are considered with different axonal conduction delays. Simulation results show that the evolutionary algorithm has the capability to find or synthesize artificial creatures which can survive in the environment successfully.
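
    The reservoir dynamics referred to above can be sketched with the standard Izhikevich update rule; the fragment below simulates a small, randomly and sparsely connected reservoir with a population-rate readout. The network size, connection density, input drive and the omission of axonal delays and of the outer genetic-algorithm loop are simplifications made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 100, 1000, 1.0                     # neurons, time steps, ms per step
W = 0.5 * rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.1)  # sparse random reservoir

# Izhikevich parameters (regular-spiking values); v is the membrane potential, u the recovery variable.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v = np.full(N, -65.0)
u = b * v

rates = []
for t in range(T):
    I = 5.0 * rng.normal(size=N)              # noisy external (sensory) drive
    fired = v >= 30.0                         # spike condition
    I += W @ fired.astype(float)              # recurrent input from neurons that just spiked
    v[fired], u[fired] = c, u[fired] + d      # reset after spikes
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    rates.append(fired.mean())                # population activity rate coding

print("mean population rate:", np.mean(rates))
```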

  16. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
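
    The core construction can be sketched in a few lines: forward propagation is written as a forward-Euler discretization of an ODE, and stability is encouraged here by using antisymmetric layer matrices, one of several stabilization strategies discussed in work of this kind. The depth, width and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_layers, h = 8, 20, 0.1        # state dimension, depth, ODE step size

def antisymmetric(K):
    """Antisymmetric part of K: its eigenvalues are purely imaginary, which
    helps keep the forward propagation (and hence the gradients) from
    exploding or vanishing as the network gets deeper."""
    return 0.5 * (K - K.T)

# One weight matrix and bias per layer, i.e. per "time step" of the ODE.
Ks = [antisymmetric(rng.normal(size=(n_features, n_features))) for _ in range(n_layers)]
bs = [np.zeros(n_features) for _ in range(n_layers)]

def forward(Y):
    """Forward Euler discretization: Y_{j+1} = Y_j + h * tanh(K_j Y_j + b_j)."""
    for K, b in zip(Ks, bs):
        Y = Y + h * np.tanh(Y @ K.T + b)
    return Y

Y0 = rng.normal(size=(32, n_features))      # a batch of input features
Y_out = forward(Y0)
print("norm in/out:", np.linalg.norm(Y0), np.linalg.norm(Y_out))
```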

  17. Infant Joint Attention, Neural Networks and Social Cognition

    Science.gov (United States)

    Mundy, Peter; Jarrold, William

    2010-01-01

    Neural network models of attention can provide a unifying approach to the study of human cognitive and emotional development (Posner & Rothbart, 2007). In this paper we argue that a neural networks approach to the infant development of joint attention can inform our understanding of the nature of human social learning, symbolic thought processes and social cognition. At its most basic, joint attention involves the capacity to coordinate one’s own visual attention with that of another person. We propose that joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one’s own attention and the attention of other people. Infant practice with joint attention is both a consequence and organizer of the development of a distributed and integrated brain network involving frontal and parietal cortical systems. This executive distributed network first serves to regulate the capacity of infants to respond to and direct the overt behavior of other people in order to share experience with others through the social coordination of visual attention. In this paper we describe this parallel and distributed neural network model of joint attention development and discuss two hypotheses that stem from this model. One is that activation of this distributed network during coordinated attention enhances the depth of information processing and encoding, beginning in the first year of life. We also propose that with development joint attention becomes internalized as the capacity to socially coordinate mental attention to internal representations. As this occurs the executive joint attention network makes vital contributions to the development of human symbolic thinking and social cognition. PMID:20884172

  18. Programmable synaptic chip for electronic neural networks

    Science.gov (United States)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of the synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  19. Dynamics of macro- and microscopic neural networks

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare

    2014-01-01

    GN), which is a class of signals with a non-trivial low-frequency component. It is assumed that certain characteristics of the low-frequency component can yield information about the neural processes behind the signal. The method has been used in a range of different studies over the course of the past 10...... that the method continues to find use, of which examples are presented. In the second part of the thesis, numerical simulations of networks of neurons are described. To simplify the analysis, a relatively simple neuron model - Leaky Integrate and Fire - is chosen. The strengths of the connections between...... shown that the synchronizing effect of the plasticity disappears when the strengths of the connections are frozen in time. Subsequently, the so-called "Sisyphus" mechanism is discussed, which is shown to cause slow fluctuations in both the network synchronization and the strengths

  20. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  1. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.
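
    The general trick behind the reported speed-up, re-expressing a fully connected classification layer as a convolution so that dense predictions for a whole image come out of a single pass, can be illustrated independently of the paper's architecture; the PyTorch fragment below uses small illustrative sizes.

```python
import torch
import torch.nn as nn

c, h, w, n_out = 16, 7, 7, 5                     # feature-map size seen by the "FC" layer
fc = nn.Linear(c * h * w, n_out)

# Same weights, expressed as a convolution whose kernel covers the whole map.
conv = nn.Conv2d(c, n_out, kernel_size=(h, w))
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(n_out, c, h, w))
    conv.bias.copy_(fc.bias)

x = torch.randn(2, c, h, w)
out_fc = fc(x.flatten(1))                        # (2, n_out)
out_conv = conv(x).flatten(1)                    # (2, n_out), numerically identical
print(torch.allclose(out_fc, out_conv, atol=1e-4))

# Applied to a larger input, the convolutional form produces a dense grid of
# predictions in a single pass instead of one crop at a time, which is where
# such speed-ups come from.
big = torch.randn(2, c, 31, 31)
print(conv(big).shape)                           # (2, n_out, 25, 25)
```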

  2. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation.

    Science.gov (United States)

    Vuković, Najdan; Miljković, Zoran

    2013-10-01

    A radial basis function (RBF) neural network is constructed from a certain number of RBF neurons, and these networks are among the most widely used neural networks for modeling various nonlinear problems in engineering. A conventional RBF neuron is usually based on a Gaussian activation function with a single width. This feature restricts the neuron's performance when modeling complex nonlinear problems. To overcome the limitation of a single scale, this paper presents a neural network with a similar yet different activation function, the hyper basis function (HBF). The HBF allows different scaling of the input dimensions to provide better generalization when dealing with complex nonlinear problems in engineering practice. The HBF is a generalization of the Gaussian neuron that applies a Mahalanobis-like distance as the distance metric between the input training sample and the prototype vector. Compared to the RBF, the HBF neuron has more parameters to optimize, but the HBF neural network needs fewer neurons to memorize the relationship between the input and output sets in order to achieve good generalization. However, recent results on HBF neural network performance have shown that an optimal way of constructing this type of neural network is needed; this paper addresses this issue and modifies a sequential learning algorithm for the HBF neural network that exploits the concept of a neuron's significance and allows growing and pruning of HBF neurons during the learning process. An extensive experimental study shows that the HBF neural network, trained with the developed learning algorithm, achieves lower prediction error and a more compact network. Copyright © 2013 Elsevier Ltd. All rights reserved.
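
    The difference between a conventional single-width RBF neuron and an HBF neuron with a Mahalanobis-like distance can be shown in a few lines; the centers, widths and the diagonal scaling matrix below are arbitrary values chosen for illustration.

```python
import numpy as np

def rbf(x, center, width):
    """Conventional RBF neuron: one scalar width shared by every input dimension."""
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

def hbf(x, center, S_inv):
    """Hyper basis function neuron: Mahalanobis-like distance
    (x - c)^T S^{-1} (x - c), so each input dimension (or direction)
    gets its own scale."""
    d = x - center
    return np.exp(-d @ S_inv @ d)

x = np.array([1.0, 4.0])
center = np.array([0.0, 0.0])

print(rbf(x, center, width=1.0))
# Diagonal S^{-1}: the second dimension varies on a much larger scale,
# so its contribution to the distance is down-weighted.
print(hbf(x, center, S_inv=np.diag([1.0, 0.04])))
```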

  3. Neural bases of recommendations differ according to social network structure.

    Science.gov (United States)

    O'Donnell, Matthew Brook; Bayer, Joseph B; Cascio, Christopher N; Falk, Emily B

    2017-01-01

    Ideas spread across social networks, but not everyone is equally positioned to be a successful recommender. Do individuals with more opportunities to connect otherwise unconnected others (high information brokers) use their brains differently than low information brokers when making recommendations? We test the hypothesis that those with more opportunities for information brokerage may use brain systems implicated in considering the thoughts, perspectives, and mental states of others (i.e. 'mentalizing') more when spreading ideas. We used social network analysis to quantify individuals' opportunities for information brokerage. This served as a predictor of activity within meta-analytically defined neural regions associated with mentalizing (dorsomedial prefrontal cortex, temporal parietal junction, medial prefrontal cortex/posterior cingulate cortex, middle temporal gyrus) as participants received feedback about peer opinions of mobile game apps. Higher information brokers exhibited more activity in this mentalizing network when receiving divergent peer feedback and updating their recommendation. These data support the idea that those in different network positions may use their brains differently to perform social tasks. Different social network positions might provide more opportunities to engage specific psychological processes. Or those who tend to engage such processes more may place themselves in systematically different network positions. These data highlight the value of integrating levels of analysis, from brain networks to social networks. © The Author (2017). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Artificial neural networks in pancreatic disease.

    Science.gov (United States)

    Bartosch-Härlid, A; Andersson, B; Aho, U; Nilsson, J; Andersson, R

    2008-07-01

    An artificial neural network (ANN) is a non-linear pattern recognition technique that is rapidly gaining in popularity in medical decision-making. This study investigated the use of ANNs for diagnostic and prognostic purposes in pancreatic disease, especially acute pancreatitis and pancreatic cancer. PubMed was searched for articles on the use of ANNs in pancreatic diseases using the MeSH terms 'neural networks (computer)', 'pancreatic neoplasms', 'pancreatitis' and 'pancreatic diseases'. A systematic review of the articles was performed. Eleven articles were identified, published between 1993 and 2007. The situations that lend themselves best to analysis by ANNs are complex multifactorial relationships, medical decisions when a second opinion is needed and when automated interpretation is required, for example in a situation of an inadequate number of experts. Conventional linear models have limitations in terms of diagnosis and prediction of outcome in acute pancreatitis and pancreatic cancer. Management of these disorders can be improved by applying ANNs to existing clinical parameters and newly established gene expression profiles. (c) 2008 British Journal of Surgery Society Ltd. Published by John Wiley & Sons, Ltd.

  5. BOUNDARY DEPTH INFORMATION USING HOPFIELD NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    S. Xu

    2016-06-01

    Full Text Available Depth information is widely used for the representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One is to use the distance cues from a depth camera, but the results heavily depend on the device, and the accuracy is degraded greatly when the distance to the object increases. The other uses binocular cues from matching to obtain the depth information. Stereo matching methods have become increasingly mature and convenient for collecting the depth information of different scenes. In the objective function, the data term ensures that the difference between matched pixels is small, and the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses the Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. Finally, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and results show that our boundary depth information is more accurate than that of other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.

  6. Maximum Entropy Approaches to Living Neural Networks

    Directory of Open Access Journals (Sweden)

    John M. Beggs

    2010-01-01

    Full Text Available Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
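
    For a very small ensemble the pairwise maximum entropy model can be fitted exactly by enumerating all states and matching the measured firing rates and pairwise correlations; the sketch below does this with random placeholder "spike trains", and the learning rate and iteration count are illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N = 5                                           # small ensemble: all 2^5 states enumerable
spikes = (rng.random((5000, N)) < 0.2).astype(float)    # placeholder binarized spike trains

# Empirical statistics the model must match: firing rates and pairwise correlations.
mean_data = spikes.mean(0)
corr_data = (spikes.T @ spikes) / len(spikes)

states = np.array(list(product([0, 1], repeat=N)), dtype=float)   # all 2^N patterns
h = np.zeros(N)                                 # biases (fields)
J = np.zeros((N, N))                            # pairwise couplings

for _ in range(2000):                           # gradient ascent on the log-likelihood
    energies = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
    p = np.exp(energies); p /= p.sum()          # model distribution P(s) ∝ exp(h·s + ½ sᵀJs)
    mean_model = p @ states
    corr_model = states.T @ (p[:, None] * states)
    h += 0.1 * (mean_data - mean_model)         # moment matching: rates
    J += 0.1 * (corr_data - corr_model)         # moment matching: pairwise correlations
    np.fill_diagonal(J, 0.0)

print("max firing-rate error:", np.abs(mean_data - mean_model).max())
```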

  7. Neural network analysis for hazardous waste characterization

    Energy Technology Data Exchange (ETDEWEB)

    Misra, M.; Pratt, L.Y.; Farris, C. [Colorado School of Mines, Golden, CO (United States)] [and others

    1995-12-31

    This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig face characterization experimental cell at INEL to determine the depth and nature of buried objects. This project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of this approach. The resulting system was able to determine the nature of buried objects correctly 87% of the time and was able to locate a buried object to within an average error of 0.8 feet. These statistics were gathered based on a large test set and so can be considered reliable. Considering the limited nature of this study, these results strongly indicate the feasibility of this approach, and the importance of appropriate preprocessing of neural network input data.

  8. Object Classification Using Substance Based Neural Network

    Directory of Open Access Journals (Sweden)

    P. Sengottuvelan

    2014-01-01

    Full Text Available Object recognition has seen tremendous growth in the field of image analysis. The required set of image objects is identified and retrieved on the basis of object recognition. In this paper, we propose a novel classification technique called substance based image classification (SIC) using a wavelet neural network. The foremost task of SIC is to remove the surrounding regions from an image to reduce the misclassified portion and to effectively reflect the shape of an object. First, the input image is segmented by the SIC system. Next, to attain more accurate information, the wavelet transform is applied to the extracted set of regions to extract the configured set of features. Finally, using the neural network classifier model and LSEG segmentation, misclassified regions and background regions are removed from the given natural image. Moreover, to increase the accuracy of object classification, the SIC system involves the removal of the regions surrounding the image object. Performance evaluation reveals that the proposed SIC system reduces the occurrence of misclassification by approximately 10–15% and better reflects the exact shape of an object.

  9. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: morvymm@yahoo.com.m [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)

    2011-02-15

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence have been used. Approaches based on genetic algorithms and artificial neural networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, they still have some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using ANNs and genetically evolved artificial neural networks with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. Here, a comparative study of both procedures has been carried out. (Author)

  10. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  11. Artificial neural networks with an infinite number of nodes

    Science.gov (United States)

    Blekas, K.; Lagaris, I. E.

    2017-10-01

    A new class of Artificial Neural Networks is described incorporating a node density function and functional weights. This network containing an infinite number of nodes, excels in generalizing and possesses a superior extrapolation capability.

  12. Shaping embodied neural networks for adaptive goal-directed behavior.

    Directory of Open Access Journals (Sweden)

    Zenas C Chao

    2008-03-01

    Full Text Available The acts of learning and memory are thought to emerge from the modifications of synaptic connections between neurons, as guided by sensory feedback during behavior. However, much is unknown about how such synaptic processes can sculpt and are sculpted by neuronal population dynamics and an interaction with the environment. Here, we embodied a simulated network, inspired by dissociated cortical neuronal cultures, with an artificial animal (an animat) through a sensory-motor loop consisting of structured stimuli, detailed activity metrics incorporating spatial information, and an adaptive training algorithm that takes advantage of spike timing dependent plasticity. By using our design, we demonstrated that the network was capable of learning associations between multiple sensory inputs and motor outputs, and the animat was able to adapt to a new sensory mapping to restore its goal behavior: move toward and stay within a user-defined area. We further showed that successful learning required proper selections of stimuli to encode sensory inputs and a variety of training stimuli with adaptive selection contingent on the animat's behavior. We also found that an individual network had the flexibility to achieve different multi-task goals, and the same goal behavior could be exhibited with different sets of network synaptic strengths. While lacking the characteristic layered structure of in vivo cortical tissue, the biologically inspired simulated networks could tune their activity in behaviorally relevant manners, demonstrating that leaky integrate-and-fire neural networks have an innate ability to process information. This closed-loop hybrid system is a useful tool to study the network properties intermediating synaptic plasticity and behavioral adaptation. The training algorithm provides a stepping stone towards designing future control systems, whether with artificial neural networks or biological animats themselves.

  13. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANN, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise and it does not depend on sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
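
    A NARX model in its series-parallel (open-loop) form can be sketched as a feedforward regressor over lagged outputs and lagged exogenous inputs; the fragment below uses scikit-learn with a toy second-order system standing in for the BOLD dynamics, and the lag orders and hidden-layer size are assumptions made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 2000
u = (rng.random(T) < 0.05).astype(float)          # exogenous input (stimulus train)
y = np.zeros(T)                                   # measured BOLD-like signal (toy dynamics)
for t in range(2, T):
    y[t] = 0.8 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.01 * rng.normal()

# NARX in series-parallel form: y(t) = f(y(t-1..t-dy), u(t-1..t-du))
dy, du = 3, 3
rows, targets = [], []
for t in range(max(dy, du), T):
    rows.append(np.concatenate([y[t - dy:t], u[t - du:t]]))
    targets.append(y[t])
X = np.array(rows); Y = np.array(targets)

narx = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh", max_iter=2000)
narx.fit(X[:1500], Y[:1500])
print("one-step-ahead R^2 on held-out data:", narx.score(X[1500:], Y[1500:]))
```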

  14. Adaptive training of feedforward neural networks by Kalman filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering; Tuerkcan, E. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands)

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by a neural network in a real-time environment, where the trained network is used for system estimation while it is further trained by means of the information provided by the experienced/exercised ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.).
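
    The idea of Kalman-filter training, treating the weights as the state of a nonlinear system observed through the network output and updating them sample by sample, can be sketched with an extended Kalman filter; the fragment below uses a numerical Jacobian for brevity, and the noise covariances, network size and target signal are illustrative assumptions rather than the plant signals used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(w, x, n_hidden=8):
    """Single-output feedforward net, all weights packed into one vector w."""
    n_in = x.size
    W1 = w[:n_in * n_hidden].reshape(n_hidden, n_in)
    b1 = w[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = w[-n_hidden - 1:-1]; b2 = w[-1]
    return np.tanh(W1 @ x + b1) @ w2 + b2

def jacobian(w, x, eps=1e-6):
    """Numerical derivative of the scalar output with respect to the weights."""
    base = net(w, x)
    return np.array([(net(w + eps * e, x) - base) / eps for e in np.eye(w.size)])

n_in, n_hidden = 2, 8
n_w = n_in * n_hidden + n_hidden + n_hidden + 1
w = 0.1 * rng.normal(size=n_w)
P = np.eye(n_w)                 # weight-error covariance
Q = 1e-5 * np.eye(n_w)          # process noise: lets the filter keep adapting on-line
R = 0.01                        # measurement noise variance

for _ in range(3000):           # streaming samples, as in on-line plant monitoring
    x = rng.uniform(-1, 1, n_in)
    target = np.sin(x[0]) + 0.5 * x[1]          # stand-in for a process signal
    H = jacobian(w, x)                          # linearize the measurement equation
    S = H @ P @ H + R                           # innovation variance (scalar)
    K = P @ H / S                               # Kalman gain
    w = w + K * (target - net(w, x))            # weight update from the innovation
    P = P - np.outer(K, H @ P) + Q

test_x = rng.uniform(-1, 1, (200, n_in))
test_mse = np.mean([(net(w, xi) - (np.sin(xi[0]) + 0.5 * xi[1])) ** 2 for xi in test_x])
print("held-out MSE after EKF training:", test_mse)
```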

  15. Modelling personal exposure to particulate air pollution: an assessment of time-integrated activity modelling, Monte Carlo simulation & artificial neural network approaches.

    Science.gov (United States)

    McCreddin, A; Alam, M S; McNabola, A

    2015-01-01

    An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies. Copyright © 2014 Elsevier GmbH. All rights reserved.
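
    The Monte Carlo technique described above can be sketched as follows: a concentration is drawn for each microenvironment from a fitted distribution and the draws are time-weighted into a 24-h average exposure. The microenvironments, lognormal parameters and daily time budgets below are hypothetical placeholders, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical microenvironments: (hours per day, lognormal mu, lognormal sigma)
# describing the PM10 concentration distribution observed in each one.
microenvironments = {
    "home":      (14.0, 2.9, 0.5),
    "office":    (8.0,  3.1, 0.4),
    "commuting": (1.5,  3.6, 0.6),
    "outdoors":  (0.5,  3.3, 0.5),
}

def simulate_daily_exposure(n_runs=10000):
    """Time-weighted 24-h average PM10 exposure, one random draw per microenvironment per run."""
    total = np.zeros(n_runs)
    for hours, mu, sigma in microenvironments.values():
        conc = rng.lognormal(mean=mu, sigma=sigma, size=n_runs)   # concentration draws (µg/m3)
        total += hours * conc
    return total / 24.0

exposure = simulate_daily_exposure()
print("median 24-h exposure:", np.median(exposure))
print("95th percentile:", np.percentile(exposure, 95))
```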

  16. Modeling the reflection of Photosynthetically active radiation in a monodominant floodable forest in the Pantanal of Mato Grosso State using multivariate statistics and neural networks

    Directory of Open Access Journals (Sweden)

    LEONE F.A. CURADO

    2016-01-01

    Full Text Available The study of radiation entrance and exit dynamics and energy consumption in a system is important for understanding the environmental processes that rule the biosphere-atmosphere interactions of all ecosystems. This study provides an analysis of the interaction of energy in the form of photosynthetically active radiation (PAR) in the Pantanal, a Brazilian wetland forest, by studying the variation of PAR reflectance and its interaction with local rainfall. The study site is located in a Private Reserve of Natural Heritage, Mato Grosso State, Brazil, where the vegetation is a monodominant forest of Vochysia divergens Phol. The results showed a high correlation between the reflection of visible radiation and rainfall; however, the behavior was not the same at the three heights studied. An analysis of the hourly variation of the reflected waves also showed the seasonality of these phenomena in relation to the dry and rainy seasons. A predictive model for PAR was developed with a neural network that has a hidden layer, and it showed a coefficient of determination of 0.938. This model showed that the Julian day and time of measurements had an inverse association with the wind profile and a direct association with the relative humidity profile.

  17. Modeling the reflection of Photosynthetically active radiation in a monodominant floodable forest in the Pantanal of Mato Grosso State using multivariate statistics and neural networks.

    Science.gov (United States)

    Curado, Leone F A; Musis, Carlo R DE; Cunha, Cristiano R DA; Rodrigues, Thiago R; Pereira, Vinicius M R; Nogueira, José S; Sanches, Luciana

    2016-09-01

    The study of radiation entrance and exit dynamics and energy consumption in a system is important for understanding the environmental processes that rule the biosphere-atmosphere interactions of all ecosystems. This study provides an analysis of the interaction of energy in the form of photosynthetically active radiation (PAR) in the Pantanal, a Brazilian wetland forest, by studying the variation of PAR reflectance and its interaction with local rainfall. The study site is located in a Private Reserve of Natural Heritage, Mato Grosso State, Brazil, where the vegetation is a monodominant forest of Vochysia divergens Phol. The results showed a high correlation between the reflection of visible radiation and rainfall; however, the behavior was not the same at the three heights studied. An analysis of the hourly variation of the reflected waves also showed the seasonality of these phenomena in relation to the dry and rainy seasons. A predictive model for PAR was developed with a neural network that has a hidden layer, and it showed a coefficient of determination of 0.938. This model showed that the Julian day and time of measurements had an inverse association with the wind profile and a direct association with the relative humidity profile.
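
    A minimal sketch of the kind of single-hidden-layer model described in these two records is given below: an MLP regression of reflected PAR on Julian day, hour, wind speed and relative humidity, assuming scikit-learn as the toolkit. The synthetic data generator and hyperparameters are illustrative assumptions and do not reproduce the Pantanal measurements or the reported coefficient of determination of 0.938.

    ```python
    # Single-hidden-layer MLP sketch for PAR reflectance regression on
    # synthetic data with seasonal and diurnal structure.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    n = 2000
    day  = rng.integers(1, 366, n)               # Julian day
    hour = rng.uniform(6, 18, n)                 # daylight hours
    wind = rng.uniform(0, 6, n)                  # wind speed, m/s
    rh   = rng.uniform(30, 100, n)               # relative humidity, %

    # Toy target: seasonal + diurnal terms plus the associations noted above.
    par_refl = (50 + 10*np.sin(2*np.pi*day/365) + 5*np.cos(2*np.pi*hour/24)
                - 1.5*wind + 0.3*rh + rng.normal(0, 2, n))

    X = np.column_stack([day, hour, wind, rh])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                                       random_state=0))
    model.fit(X[:1500], par_refl[:1500])
    print("held-out R^2:", model.score(X[1500:], par_refl[1500:]))
    ```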

  18. Automated Modeling of Microwave Structures by Enhanced Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-12-01

    Full Text Available The paper describes a methodology for the automated creation of neural models of microwave structures. During the creation process, artificial neural networks are trained using a combination of particle swarm optimization and the quasi-Newton method to avoid critical training problems of conventional neural nets. In the paper, neural networks are used to approximate the behavior of a planar microwave filter (moment method, Zeland IE3D). In order to evaluate the efficiency of neural modeling, global optimizations are performed using both numerical models and neural ones. The two approaches are compared in terms of CPU-time demands and accuracy. In the conclusions, methodological recommendations for including neural networks in microwave design are formulated.
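
    A hedged sketch of the hybrid training idea (a global particle-swarm search followed by quasi-Newton refinement) is given below for a tiny network fitted to a stand-in response curve; the target function, swarm hyperparameters and use of SciPy's L-BFGS-B routine are illustrative assumptions, not the Zeland IE3D filter-modeling setup of the paper.

    ```python
    # Hybrid training sketch: particle-swarm optimization over the weights of a
    # tiny MLP, followed by quasi-Newton (L-BFGS) refinement of the best particle.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    x = np.linspace(-1, 1, 100)
    y = np.abs(np.sin(4 * x))                       # stand-in for a device response

    n_h = 8
    n_w = 3 * n_h + 1                               # 1-n_h-1 tanh network

    def net(w, x):
        W1, b1, W2, b2 = w[:n_h], w[n_h:2*n_h], w[2*n_h:3*n_h], w[3*n_h]
        return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

    def loss(w):
        return np.mean((net(w, x) - y) ** 2)

    # --- particle swarm optimization (global stage) ---
    n_particles, iters = 40, 200
    pos = rng.uniform(-2, 2, (n_particles, n_w))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_w))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    # --- quasi-Newton refinement (local stage) ---
    result = minimize(loss, gbest, method="L-BFGS-B")
    print("PSO loss:", loss(gbest), "-> after L-BFGS:", result.fun)
    ```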

  19. Quantum Entanglement in Neural Network States

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-04-01

    Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling and meanwhile strongly deviates from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and exhibits Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the
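
    For a small system, the entanglement properties discussed above can be probed directly by brute force. The sketch below builds an (unnormalized) RBM wavefunction over all configurations of a few spins with random real weights, bipartitions the chain, and computes the entanglement entropy from the Schmidt spectrum; the system size, parameter scales and real-valued weights are illustrative assumptions rather than the paper's analytical constructions.

    ```python
    # RBM-state entanglement sketch: enumerate all spin configurations, build
    # the RBM amplitudes, reshape into a bipartition matrix and take the
    # entanglement entropy from its singular values.
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(5)
    n_v, n_h = 8, 8                                  # visible spins, hidden units
    a = 0.1 * rng.standard_normal(n_v)
    b = 0.1 * rng.standard_normal(n_h)
    W = 0.3 * rng.standard_normal((n_h, n_v))        # dense (long-range) couplings

    def rbm_amplitude(s):
        """Unnormalized RBM amplitude for spin configuration s in {-1,+1}^n_v."""
        theta = b + W @ s
        return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

    configs = np.array(list(product([-1, 1], repeat=n_v)), dtype=float)
    psi = np.array([rbm_amplitude(s) for s in configs])
    psi /= np.linalg.norm(psi)

    # Bipartition the chain into the first n_v//2 spins (A) and the rest (B).
    n_a = n_v // 2
    M = psi.reshape(2 ** n_a, 2 ** (n_v - n_a))      # rows: A configs, cols: B configs
    schmidt = np.linalg.svd(M, compute_uv=False)
    p = schmidt ** 2
    p = p[p > 1e-15]
    entropy = -np.sum(p * np.log(p))
    print("entanglement entropy of the half-chain cut:", entropy)
    ```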

  20. Quantum Entanglement in Neural Network States

    Directory of Open Access Journals (Sweden)

    Dong-Ling Deng

    2017-05-01

    Full Text Available Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling and meanwhile strongly deviates from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and exhibits Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our