WorldWideScience

Sample records for two-hidden layer neural

  1. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  2. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Implementation of multi-layer feed forward neural network on PIC16F877 microcontroller

    International Nuclear Information System (INIS)

    Nur Aira Abd Rahman

    2005-01-01

    Artificial Neural Network (ANN) is an electronic model based on the neural structure of the brain. Similar to the human brain, an ANN consists of interconnected simple processing units, or neurons, that process inputs to generate output signals. ANN operation is divided into two categories: training mode and service mode. This project aims to implement an ANN on a PIC microcontroller that enables on-chip or stand-alone training and service modes. The inputs can come from sensors or switches, while the outputs can be used to control valves, motors, light sources and much more. As partial development of the project, this paper reports the current status and results of the implemented ANN. The hardware portion of this project incorporates a Microchip PIC16F877A microcontroller along with a uM-FPU math co-processor. The uM-FPU is a 32-bit floating point co-processor used to execute the complex calculations required by the sigmoid activation function of the neurons. The ANN algorithm is implemented as a software program written in assembly language. The implemented ANN structure has three layers, including one hidden layer, and five neurons, two of which are hidden neurons. To prove its operability and functionality, the network is trained to solve three common logic gate operations: AND, OR, and XOR. This paper concludes that the ANN has been successfully implemented on the PIC16F877A and uM-FPU math co-processor hardware and works correctly in both training and service modes. (Author)
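
    As a rough illustration of the tiny architecture described above (one hidden layer, two hidden neurons, sigmoid activations, trained on logic gates), here is a minimal NumPy sketch of a 2-2-1 network learning XOR by plain backpropagation. The learning rate, epoch count, and random initialisation are illustrative assumptions; this is not the fixed-point assembly implementation used on the PIC16F877A.

    ```python
    import numpy as np

    # Minimal 2-2-1 sigmoid network trained on XOR with plain backpropagation.
    # Layer sizes mirror the abstract (one hidden layer, two hidden neurons);
    # learning rate and epoch count are illustrative assumptions.
    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 2))   # input -> hidden weights
    b1 = np.zeros((1, 2))
    W2 = rng.normal(size=(2, 1))   # hidden -> output weights
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(20000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass (sum-of-squares error)
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 3))  # should approach [0, 1, 1, 0]
    ```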

  4. Failure detection studies by layered neural network

    International Nuclear Information System (INIS)

    Ciftcioglu, O.; Seker, S.; Turkcan, E.

    1991-06-01

    Failure detection studies by layered neural network (NN) are described. The particular application area is an operating nuclear power plant, where failure detection is of concern as a result of real-time system surveillance. The NN consists of three layers, one of which is hidden, and the NN parameters are determined adaptively by the backpropagation (BP) method; this process constitutes the training phase. Studies are performed using the power spectra of the pressure signal of the primary system of an operating nuclear power plant of PWR type. The studies revealed that, by means of the NN approach, failure detection can effectively be carried out using redundant information, as is the case in this work: from measurements of the primary pressure signal one can estimate the primary system coolant temperature and hence the deviation from the operational temperature state, the operational status identified in the training phase being referred to as normal. (author). 13 refs.; 4 figs.; 2 tabs

  5. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason there is so much interest in these technologies is that classical solutions either do not exist or are not suitable because they lack robustness. They are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is its relative simplicity and the possibility of self-learning based on a set of pattern situations. The algorithm most commonly used for the learning phase is back-propagation of error (BPE). BPE is based on minimizing an error function representing the sum of squared errors at the outputs of the neural net over all patterns of the learning set. However, already on first use of BPE it becomes apparent that the handling of the learning factor must be completed by a suitable method. The stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning factor handling method and to present the results.
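
    The abstract centres on managing the learning factor during back-propagation of error. One simple heuristic in the same spirit, though not necessarily either of the two functions derived in the article, is the "bold driver" rule: grow the learning factor while the sum-of-squares error keeps falling and cut it sharply when the error rises. A minimal, self-contained sketch on a toy least-squares problem:

    ```python
    import numpy as np

    # "Bold driver" style management of the learning factor during batch
    # gradient descent on a sum-of-squares error. This is an illustrative
    # heuristic in the spirit of the abstract, not the two functions
    # derived in the article itself.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + 0.01 * rng.normal(size=50)

    w = np.zeros(3)
    lr = 0.001
    prev_err = np.inf
    for step in range(200):
        grad = 2.0 * X.T @ (X @ w - y)        # gradient of the sum of squared errors
        w_new = w - lr * grad
        err = np.sum((X @ w_new - y) ** 2)
        if err < prev_err:                    # error fell: accept step, grow the rate
            w, prev_err, lr = w_new, err, lr * 1.05
        else:                                 # error rose: reject step, shrink the rate
            lr *= 0.5
    print(prev_err, lr)
    ```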

  6. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  7. Typology of nonlinear activity waves in a layered neural continuum.

    Science.gov (United States)

    Koch, Paul; Leisman, Gerry

    2006-04-01

    Neural tissue, a medium containing electro-chemical energy, can amplify small increments in cellular activity. The growing disturbance, measured as the fraction of active cells, manifests as propagating waves. In a layered geometry with a time delay in synaptic signals between the layers, the delay is instrumental in determining the amplified wavelengths. The growth of the waves is limited by the finite number of neural cells in a given region of the continuum. As wave growth saturates, the resulting activity patterns in space and time show a variety of forms, ranging from regular monochromatic waves to highly irregular mixtures of different spatial frequencies. The type of wave configuration is determined by a number of parameters, including alertness and synaptic conditioning as well as delay. For all cases studied, using numerical solution of the nonlinear Wilson-Cowan (1973) equations, there is an interval in delay in which the wave mixing occurs. As delay increases through this interval, during a series of consecutive waves propagating through a continuum region, the activity within that region changes from a single-frequency to a multiple-frequency pattern and back again. The diverse spatio-temporal patterns give a more concrete form to several metaphors advanced over the years to attempt an explanation of cognitive phenomena: Activity waves embody the "holographic memory" (Pribram, 1991); wave mixing provides a plausible cause of the competition called "neural Darwinism" (Edelman, 1988); finally the consecutive generation of growing neural waves can explain the discontinuousness of "psychological time" (Stroud, 1955).
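
    The results above come from numerical solution of the nonlinear Wilson-Cowan equations on a layered continuum with synaptic delay. As a much-reduced illustration of the underlying dynamics, the sketch below integrates the classic space-free two-population Wilson-Cowan model with forward Euler; the coupling constants and inputs are standard textbook values that give oscillatory activity, not the parameters used in the paper.

    ```python
    import numpy as np

    # Forward-Euler integration of the space-free two-population Wilson-Cowan
    # model (excitatory E, inhibitory I). Parameter values are illustrative
    # choices known to give oscillations, not the layered-continuum setup
    # with synaptic delay studied in the paper.
    def S(x, a, theta):
        # logistic response function, shifted so that S(0) = 0
        return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
    a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
    P, Q, r, dt = 1.25, 0.0, 1.0, 0.01

    E, I = 0.1, 0.05
    trace = []
    for _ in range(5000):
        dE = -E + (1.0 - r * E) * S(c1 * E - c2 * I + P, a_e, th_e)
        dI = -I + (1.0 - r * I) * S(c3 * E - c4 * I + Q, a_i, th_i)
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    print(min(trace[2000:]), max(trace[2000:]))  # oscillation range after transients
    ```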

  8. Two-Layer Feedback Neural Networks with Associative Memories

    International Nuclear Information System (INIS)

    Gui-Kun, Wu; Hong, Zhao

    2008-01-01

    We construct a two-layer feedback neural network by a Monte Carlo based algorithm to store memories as fixed-point attractors or as limit-cycle attractors. Special attention is focused on comparing the dynamics of the network with limit-cycle attractors and with fixed-point attractors. It is found that the former has better retrieval property than the latter. Particularly, spurious memories may be suppressed completely when the memories are stored as a long-limit cycle. Potential application of limit-cycle-attractor networks is discussed briefly. (general)

  9. Usage of neural network to predict aluminium oxide layer thickness.

    Science.gov (United States)

    Michal, Peter; Vagaská, Alena; Gombár, Miroslav; Kmec, Ján; Spišák, Emil; Kučerka, Daniel

    2015-01-01

    This paper examines the influence of the chemical composition of the electrolyte (the amounts of sulphuric acid, aluminium cations, and oxalic acid it contains) and of the operating parameters of the anodic oxidation of aluminium (electrolyte temperature, anodizing time, and applied voltage) on the resulting thickness of the aluminium oxide layer. The impact of these six factors is shown by using a central composite design of experiment and a cubic neural unit with the Levenberg-Marquardt algorithm during the evaluation of the results. The paper also deals with current densities of 1 A·dm−2 and 3 A·dm−2 for creating the aluminium oxide layer.

  10. Usage of Neural Network to Predict Aluminium Oxide Layer Thickness

    Directory of Open Access Journals (Sweden)

    Peter Michal

    2015-01-01

    Full Text Available This paper shows an influence of chemical composition of used electrolyte, such as amount of sulphuric acid in electrolyte, amount of aluminium cations in electrolyte and amount of oxalic acid in electrolyte, and operating parameters of process of anodic oxidation of aluminium such as the temperature of electrolyte, anodizing time, and voltage applied during anodizing process. The paper shows the influence of those parameters on the resulting thickness of aluminium oxide layer. The impact of these variables is shown by using central composite design of experiment for six factors (amount of sulphuric acid, amount of oxalic acid, amount of aluminium cations, electrolyte temperature, anodizing time, and applied voltage) and by usage of the cubic neural unit with Levenberg-Marquardt algorithm during the results evaluation. The paper also deals with current densities of 1 A·dm−2 and 3 A·dm−2 for creating aluminium oxide layer.

  11. Antibacterial, anti-inflammatory and neuroprotective layer-by-layer coatings for neural implants

    Science.gov (United States)

    Zhang, Zhiling; Nong, Jia; Zhong, Yinghui

    2015-08-01

    Objective. Infection, inflammation, and neuronal loss are common issues that seriously affect the functionality and longevity of chronically implanted neural prostheses. Minocycline hydrochloride (MH) is a broad-spectrum antibiotic and effective anti-inflammatory drug that also exhibits potent neuroprotective activities. In this study, we investigated the development of biocompatible thin film coatings capable of sustained release of MH for improving the long term performance of implanted neural electrodes. Approach. We developed a novel magnesium binding-mediated drug delivery mechanism for controlled and sustained release of MH from an ultrathin hydrophilic layer-by-layer (LbL) coating and characterized the parameters that control MH loading and release. The anti-biofilm, anti-inflammatory and neuroprotective potencies of the LbL coating and released MH were also examined. Main results. Sustained release of physiologically relevant amount of MH for 46 days was achieved from the Mg2+-based LbL coating at a thickness of 1.25 μm. In addition, MH release from the LbL coating is pH-sensitive. The coating and released MH demonstrated strong anti-biofilm, anti-inflammatory, and neuroprotective potencies. Significance. This study reports, for the first time, the development of a bioactive coating that can target infection, inflammation, and neuroprotection simultaneously, which may facilitate the translation of neural interfaces to clinical applications.

  12. Separation prediction in two dimensional boundary layer flows using artificial neural networks

    International Nuclear Information System (INIS)

    Sabetghadam, F.; Ghomi, H.A.

    2003-01-01

    In this article, the ability of artificial neural networks to predict separation in steady two-dimensional boundary layer flows is studied. Data for network training are extracted from the numerical solution of an ODE obtained from the von Kármán integral equation with an approximate one-parameter Pohlhausen velocity profile. As an appropriate neural network, a two-layer radial basis generalized regression artificial neural network is used. The results show good agreement between the overall behavior of the flow fields predicted by the artificial neural network and the actual flow fields for some cases. The method can easily be extended to unsteady separation and to turbulent as well as compressible boundary layer flows. (author)
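
    The network used above is a two-layer radial basis generalized regression neural network (GRNN), which is essentially Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of the training targets. A minimal sketch on synthetic data follows; the bandwidth and the toy data set are assumptions, standing in for the Pohlhausen-profile training data described in the abstract.

    ```python
    import numpy as np

    # Generalized regression neural network (GRNN): predictions are a
    # Gaussian-kernel-weighted average of the training targets. The toy data
    # and bandwidth below are assumptions for illustration only.
    def grnn_predict(X_train, y_train, X_query, sigma=0.1):
        # pairwise squared distances between query and training points
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
        return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

    rng = np.random.default_rng(2)
    X_train = rng.uniform(0, 1, size=(200, 1))
    y_train = np.sin(2 * np.pi * X_train[:, 0]) + 0.05 * rng.normal(size=200)
    X_query = np.linspace(0, 1, 11)[:, None]

    print(np.round(grnn_predict(X_train, y_train, X_query), 2))
    ```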

  13. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...
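
    A highway layer mixes a nonlinear transform H(x) with the unchanged input x through a learned transform gate T(x), giving y = T(x)*H(x) + (1 - T(x))*x. The NumPy forward-pass sketch below shows one such layer with random weights; in the paper the highway module sits between the Bi-RNN and CNN stages, which is not reproduced here.

    ```python
    import numpy as np

    # Forward pass of a single highway layer:
    #   y = T(x) * H(x) + (1 - T(x)) * x
    # H is an affine transform with tanh, T is a sigmoid "transform gate".
    # Random weights for illustration; input and output widths must match
    # so that the carry path (1 - T) * x is well defined.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def highway_forward(x, W_h, b_h, W_t, b_t):
        H = np.tanh(x @ W_h + b_h)        # candidate transform
        T = sigmoid(x @ W_t + b_t)        # transform gate in (0, 1)
        return T * H + (1.0 - T) * x      # gated mix of transform and carry

    rng = np.random.default_rng(3)
    d = 8
    x = rng.normal(size=(4, d))           # batch of 4 feature vectors
    W_h, b_h = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)
    W_t, b_t = rng.normal(scale=0.1, size=(d, d)), np.full(d, -1.0)  # bias gate toward carry
    print(highway_forward(x, W_h, b_h, W_t, b_t).shape)
    ```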

  14. A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.

    Science.gov (United States)

    Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen

    2018-03-01

    In this paper, based on calculus and penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and converges to an optimal solution of the constrained complex-variable convex optimization finally. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has a lower model complexity and better convergence. Some numerical examples and application are presented to substantiate the effectiveness of the proposed neural network.

  15. Gradual DropIn of Layers to Train Very Deep Neural Networks

    OpenAIRE

    Smith, Leslie N.; Hand, Emily M.; Doster, Timothy

    2015-01-01

    We introduce the concept of dynamically growing a neural network during training. In particular, an untrainable deep network starts as a trainable shallow network and newly added layers are slowly, organically added during training, thereby increasing the network's depth. This is accomplished by a new layer, which we call DropIn. The DropIn layer starts by passing the output from a previous layer (effectively skipping over the newly added layers), then increasingly including units from the ne...
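
    One plausible reading of the DropIn idea sketched in the abstract is a stochastic skip connection: early in training most units of the newly added layer are replaced by the corresponding units of the previous layer's output, and the inclusion probability grows over time. The sketch below assumes matching layer widths and a linear inclusion schedule; both are illustrative assumptions rather than the authors' exact formulation.

    ```python
    import numpy as np

    # DropIn-style forward pass (illustrative reading of the abstract):
    # each unit of the newly added layer is included with probability p,
    # otherwise the corresponding unit of the previous layer's output is
    # passed through unchanged. p ramps up over training so the new layer
    # is "grown in" gradually. Matching layer widths are assumed.
    def dropin_forward(prev_out, new_out, p, rng):
        mask = rng.random(new_out.shape) < p       # True -> take new layer's unit
        return np.where(mask, new_out, prev_out)

    rng = np.random.default_rng(4)
    prev_out = rng.normal(size=(4, 16))            # output of the layer below
    new_out = np.tanh(prev_out @ rng.normal(scale=0.1, size=(16, 16)))

    for epoch in range(0, 101, 25):
        p = min(1.0, epoch / 100.0)                # linear inclusion schedule
        mixed = dropin_forward(prev_out, new_out, p, rng)
        print(epoch, p, float(np.mean(mixed == prev_out)))
    ```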

  16. Optimizing the Flexural Strength of Beams Reinforced with Fiber Reinforced Polymer Bars Using Back-Propagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Bahman O. Taha

    2015-06-01

    Full Text Available Reinforced concrete with fiber reinforced polymer (FRP) bars (carbon, aramid, basalt and glass) is used in places where a high ratio of strength to weight is required and corrosion is not acceptable. The behavior of structural members using FRP bars is hard to model using traditional methods because of the highly non-linear relationships among the factors influencing the strength of structural members. A back-propagation neural network is a very effective method for modeling such complicated relationships. In this paper, a back-propagation neural network is used for modeling the flexural behavior of beams reinforced with FRP bars. 101 samples of beams reinforced with fiber bars were collected from the literature. Five important factors are taken into consideration for predicting the strength of the beams. Two models of the Multilayer Perceptron (MLP) are created, the first with a single hidden layer and the second with two hidden layers. The two-hidden-layer model showed a better accuracy ratio than the single-hidden-layer model. A parametric study has been done for the two-hidden-layer model only. Equations are derived to be used instead of the model, and the importance of the input factors is determined. Results showed that the neural network is successful in modeling the behavior of concrete beams reinforced with different types of FRP bars.
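
    A compact way to reproduce the single-hidden-layer versus two-hidden-layer comparison described above is scikit-learn's MLPRegressor. The synthetic five-factor data, layer widths, and solver settings in the sketch below are assumptions for illustration; the study itself used 101 beam samples collected from the literature.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Compare a single-hidden-layer MLP with a two-hidden-layer MLP on a
    # synthetic nonlinear regression task with five input factors, loosely
    # mirroring the study design (101 samples, 5 factors). The data, layer
    # widths and solver settings are illustrative assumptions.
    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(101, 5))
    y = X[:, 0] * X[:, 1] + np.sin(np.pi * X[:, 2]) + 0.5 * X[:, 3] ** 2 \
        + 0.1 * rng.normal(size=101)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    for hidden in [(16,), (16, 8)]:
        model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
        model.fit(X_tr, y_tr)
        print(hidden, round(model.score(X_te, y_te), 3))   # R^2 on held-out samples
    ```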

  17. The Multi-Layered Perceptrons Neural Networks for the Prediction of Daily Solar Radiation

    OpenAIRE

    Radouane Iqdour; Abdelouhab Zeroual

    2007-01-01

    The Multi-Layered Perceptron (MLP) neural networks have been very successful in a number of signal processing applications. In this work we study the possibilities and the difficulties encountered in applying MLP neural networks to the prediction of daily solar radiation data. We used the Polak-Ribière algorithm for training the neural networks. A comparison, in terms of statistical indicators, with a linear model most used in the literature, is also perfo...

  18. On the approximation by single hidden layer feedforward neural networks with fixed weights

    OpenAIRE

    Guliyev, Namig J.; Ismailov, Vugar E.

    2017-01-01

    International audience; Feedforward neural networks have wide applicability in various disciplines of science due to their universal approximation property. Some authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property provided that approximated functions are univariate. But this phenomenon does not place any restrictions on the number of neurons in the hidden layer. The more this number, the more the p...

  19. One-dimensional model of cable-in-conduit superconductors under cyclic loading using artificial neural networks

    International Nuclear Information System (INIS)

    Lefik, M.; Schrefler, B.A.

    2002-01-01

    An artificial neural network with two hidden layers is trained to define a mechanical constitutive relation for a superconducting cable under transverse cyclic loading. The training is performed using a set of experimental data. The behaviour of the cable is strongly non-linear, and irreversible phenomena result in complicated hysteresis loops. The performance of the ANN, which is applied as a tool for the storage, interpolation and interpretation of experimental data, is investigated from both numerical and physical viewpoints

  20. A stochastic learning algorithm for layered neural networks

    International Nuclear Information System (INIS)

    Bartlett, E.B.; Uhrig, R.E.

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given
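
    The random optimization method referred to above replaces gradient descent: a random search vector is drawn from a probability density function and the step is kept only if the training error improves (the paper additionally adapts the density and the hidden-node count dynamically). A minimal fixed-architecture sketch of plain Gaussian random search on a tiny sigmoid network is given below; the OPDF and DNA refinements are not reproduced.

    ```python
    import numpy as np

    # Plain random-optimization training of a tiny 2-2-1 sigmoid network on XOR:
    # perturb all weights with a Gaussian search vector and keep the step only
    # if the sum-of-squares error decreases. The adaptive PDF (OPDF) and dynamic
    # node architecture (DNA) features of the paper are not reproduced here.
    rng = np.random.default_rng(6)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    def forward(w, X):
        W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    def error(w):
        return np.sum((forward(w, X) - y) ** 2)

    w = rng.normal(scale=0.5, size=9)
    best = error(w)
    for _ in range(20000):
        cand = w + rng.normal(scale=0.2, size=9)   # Gaussian search vector
        e = error(cand)
        if e < best:
            w, best = cand, e
    print(round(best, 4), np.round(forward(w, X), 2))
    ```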

  1. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  2. A one-layer recurrent neural network for constrained nonsmooth optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-10-01

    This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.

  3. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

    In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.

  4. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. A new backpropagation learning algorithm for layered neural networks with nondifferentiable units.

    Science.gov (United States)

    Oohori, Takahumi; Naganuma, Hidenori; Watanabe, Kazuhisa

    2007-05-01

    We propose a digital version of the backpropagation algorithm (DBP) for three-layered neural networks with nondifferentiable binary units. This approach feeds teacher signals to both the middle and output layers, whereas with a simple perceptron, they are given only to the output layer. The additional teacher signals enable the DBP to update the coupling weights not only between the middle and output layers but also between the input and middle layers. A neural network based on DBP learning is fast and easy to implement in hardware. Simulation results for several linearly nonseparable problems such as XOR demonstrate that the DBP performs favorably when compared to the conventional approaches. Furthermore, in large-scale networks, simulation results indicate that the DBP provides high performance.

  6. Germ layers, the neural crest and emergent organization in development and evolution.

    Science.gov (United States)

    Hall, Brian K

    2018-04-10

    Discovered in chick embryos by Wilhelm His in 1868 and named the neural crest by Arthur Milnes Marshall in 1879, the neural crest cells that arise from the neural folds have since been shown to differentiate into almost two dozen vertebrate cell types and to have played major roles in the evolution of such vertebrate features as bone, jaws, teeth, visceral (pharyngeal) arches, and sense organs. I discuss the discovery that ectodermal neural crest gave rise to mesenchyme and the controversy generated by that finding; the germ layer theory maintained that only mesoderm could give rise to mesenchyme. A second topic of discussion is germ layers (including the neural crest) as emergent levels of organization in animal development and evolution that facilitated major developmental and evolutionary change. The third topic is gene networks, gene co-option, and the evolution of gene-signaling pathways as key to developmental and evolutionary transitions associated with the origin and evolution of the neural crest and neural crest cells. © 2018 Wiley Periodicals, Inc.

  7. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1 -norm minimization problems.

  8. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on some concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm singles out the optimal parameter setting iteratively, thus making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and prospective future applications. Some simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers respectively, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.

  10. Learning behavior and temporary minima of two-layer neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the occurrence of temporary minima during training of a single-output, two-layer neural network, with learning according to the back-propagation algorithm. A new vector decomposition method is introduced, which simplifies the mathematical analysis of

  11. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints.

    Science.gov (United States)

    Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui

    2017-10-01

    Pseudoconvex optimization problem, as an important nonconvex optimization problem, plays an important role in scientific and engineering applications. In this paper, a recurrent one-layer neural network is proposed for solving the pseudoconvex optimization problem with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network is convergent to an optimal solution of the related problem. Compared with the related existing recurrent neural networks for the pseudoconvex optimization problems, the proposed neural network in this paper does not need the penalty parameters and has a better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, and we make some detailed comparisons with the known related conclusions. In the end, some numerical examples are provided to illustrate the effectiveness of the performance of the proposed neural network.

  12. Synchronization and Inter-Layer Interactions of Noise-Driven Neural Networks.

    Science.gov (United States)

    Yuniati, Anis; Mai, Te-Lun; Chen, Chi-Ming

    2017-01-01

    In this study, we used the Hodgkin-Huxley (HH) model of neurons to investigate the phase diagram of a developing single-layer neural network and that of a network consisting of two weakly coupled neural layers. These networks are noise driven and learn through the spike-timing-dependent plasticity (STDP) or the inverse STDP rules. We described how these networks transited from a non-synchronous background activity state (BAS) to a synchronous firing state (SFS) by varying the network connectivity and the learning efficacy. In particular, we studied the interaction between a SFS layer and a BAS layer, and investigated how synchronous firing dynamics was induced in the BAS layer. We further investigated the effect of the inter-layer interaction on a BAS to SFS repair mechanism by considering three types of neuron positioning (random, grid, and lognormal distributions) and two types of inter-layer connections (random and preferential connections). Among these scenarios, we concluded that the repair mechanism has the largest effect for a network with the lognormal neuron positioning and the preferential inter-layer connections.

  13. Growth kinetics of borided layers: Artificial neural network and least square approaches

    Science.gov (United States)

    Campos, I.; Islas, M.; Ramírez, G.; VillaVelázquez, C.; Mota, C.

    2007-05-01

    The present study evaluates the growth kinetics of the boride layer Fe2B in AISI 1045 steel, by means of neural networks and the least square techniques. The Fe2B phase was formed at the material surface using the paste boriding process. The surface boron potential was modified considering different boron paste thicknesses, with exposure times of 2, 4 and 6 h, and treatment temperatures of 1193, 1223 and 1273 K. The neural network and the least square models were set by the layer thickness of the Fe2B phase, and assuming that the growth of the boride layer follows a parabolic law. The reliability of the techniques used is compared with a set of experiments at a temperature of 1223 K with 5 h of treatment time and boron potentials of 2, 3, 4 and 5 mm. The results of the Fe2B layer thicknesses show a mean error of 5.31% for the neural network and 3.42% for the least square method.
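
    The least square treatment mentioned above assumes parabolic growth, d^2 = K(T) t, with the rate constant K following an Arrhenius dependence on temperature. The sketch below fits K at each temperature by least squares and then extracts an activation energy from an Arrhenius plot; the synthetic thicknesses and the 180 kJ/mol activation energy are fabricated for illustration and are not the paper's measurements.

    ```python
    import numpy as np

    # Least-squares treatment of parabolic boride-layer growth, d^2 = K(T) * t,
    # followed by an Arrhenius fit ln K = ln K0 - Q/(R T). The synthetic
    # "measurements" below are illustrative, not the paper's experimental data.
    R = 8.314                                            # J/(mol K)
    temps = np.array([1193.0, 1223.0, 1273.0])           # treatment temperatures (K)
    times = np.array([2.0, 4.0, 6.0]) * 3600.0           # exposure times (s)

    rng = np.random.default_rng(7)
    K_true = 1.7e-5 * np.exp(-180e3 / (R * temps))       # assumed Arrhenius law, m^2/s
    d = np.sqrt(K_true[:, None] * times[None, :]) * (1.0 + 0.02 * rng.normal(size=(3, 3)))

    # Per-temperature least squares for d^2 = K t (line through the origin):
    # K = sum(t * d^2) / sum(t^2)
    K_fit = (times * d ** 2).sum(axis=1) / (times ** 2).sum()

    # Arrhenius fit of ln K against 1/T recovers the activation energy Q.
    slope, intercept = np.polyfit(1.0 / temps, np.log(K_fit), 1)
    print("activation energy Q [kJ/mol] ~", round(-slope * R / 1000.0, 1))
    ```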

  14. Theoretical properties of the global optimizer of two layer neural network

    OpenAIRE

    Boob, Digvijay; Lan, Guanghui

    2017-01-01

    In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class involves "almost" all functions which are not piecewise linear), we have that first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. ...

  15. Selection of hidden layer nodes in neural networks by statistical tests

    International Nuclear Information System (INIS)

    Ciftcioglu, Ozer

    1992-05-01

    A statistical methodology for selecting the number of hidden layer nodes in feedforward neural networks is described. The method considers the network as an empirical model for the experimental data set subject to pattern classification, so that the selection process becomes model estimation through parameter identification. The solution is performed for an overdetermined estimation problem for identification using a nonlinear least squares minimization technique. The number of hidden layer nodes is determined as a result of hypothesis testing. Accordingly, a network structure that is redundant with respect to the number of parameters is avoided, and the classification error is kept to a minimum. (author). 11 refs.; 4 figs.; 1 tab

  16. 3D Polygon Mesh Compression with Multi Layer Feed Forward Neural Networks

    Directory of Open Access Journals (Sweden)

    Emmanouil Piperakis

    2003-06-01

    Full Text Available In this paper, an experiment is conducted which proves that multi layer feed forward neural networks are capable of compressing 3D polygon meshes. Our compression method not only preserves the initial accuracy of the represented object but also enhances it. The neural network employed includes the vertex coordinates, the connectivity and normal information in one compact form, converting the discrete and surface polygon representation into an analytic, solid colloquial. Furthermore, the 3D object in its compressed neural form can be directly - without decompression - used for rendering. The neural compression - representation is viable to 3D transformations without the need of any anti-aliasing techniques - transformations do not disrupt the accuracy of the geometry. Our method does not suffer any scaling problem and was tested with objects of 300 to 10^7 polygons - such as the David of Michelangelo - achieving in all cases an order of O(b^3) fewer bits for the representation than any other commonly known compression method. The simplicity of our algorithm and the established mathematical background of neural networks combined with their aptness for hardware implementation can establish this method as a good solution for polygon compression and, if further investigated, a novel approach for 3D collision, animation and morphing.

  17. Hypothetical Pattern Recognition Design Using Multi-Layer Perceptron Neural Network For Supervised Learning

    Directory of Open Access Journals (Sweden)

    Md. Abdullah-al-mamun

    2015-08-01

    Full Text Available Abstract Humans can identify diverse shapes and patterns in the real world in an effortless fashion, because their intelligence has grown since birth through facing numerous learning processes. In the same way, we can prepare a machine with a human-like brain, called an Artificial Neural Network, that can recognize different patterns from real-world objects. Although various techniques exist for implementing pattern recognition, artificial neural network approaches have recently received significant attention, because an artificial neural network, like a human brain, learns from different observations and makes decisions based on previously learned rules. Over 50 years of research, pattern recognition for machine learning using artificial neural networks has achieved significant results, and for this reason many real-world problems can be solved by modeling the pattern recognition process. The objective of this paper is to present the theoretical concept of pattern recognition design using a Multi-Layer Perceptron neural network, within artificial intelligence algorithms, as the best possible way of utilizing available resources to make decisions with human-like performance.

  18. Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks

    Directory of Open Access Journals (Sweden)

    Shalin Savalia

    2018-05-01

    Full Text Available The electrocardiogram (ECG plays an imperative role in the medical field, as it records heart signal over time and is used to discover numerous cardiovascular diseases. If a documented ECG signal has a certain irregularity in its predefined features, this is called arrhythmia, the types of which include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular, etc. This has encouraged us to do research that consists of distinguishing between several arrhythmias by using deep neural network algorithms such as multi-layer perceptron (MLP and convolution neural network (CNN. The TensorFlow library that was established by Google for deep learning and machine learning is used in python to acquire the algorithms proposed here. The ECG databases accessible at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed algorithm consists of four hidden layers with weights, biases in MLP, and four-layer convolution neural networks which map ECG samples to the different classes of arrhythmia. The accuracy of the algorithm surpasses the performance of the current algorithms that have been developed by other cardiologists in both sensitivity and precision.
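
    A skeleton of the four-hidden-layer MLP classifier described above, written with the TensorFlow/Keras API mentioned in the abstract, is sketched below. The input length, layer widths, class count, and the random placeholder data are assumptions; real ECG beats from the PhysioBank/kaggle databases would replace them.

    ```python
    import numpy as np
    import tensorflow as tf

    # Skeleton of a four-hidden-layer MLP arrhythmia classifier in Keras,
    # following the architecture described in the abstract. Input length,
    # layer widths, number of classes, and the random placeholder data are
    # illustrative assumptions; real ECG beats would replace them.
    n_samples, beat_len, n_classes = 1000, 187, 5
    X = np.random.rand(n_samples, beat_len).astype("float32")
    y = np.random.randint(0, n_classes, size=n_samples)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(beat_len,)),
        tf.keras.layers.Dense(256, activation="relu"),   # hidden layer 1
        tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 2
        tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 3
        tf.keras.layers.Dense(32, activation="relu"),    # hidden layer 4
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
    ```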

  19. Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks.

    Science.gov (United States)

    Savalia, Shalin; Emamian, Vahid

    2018-05-04

    The electrocardiogram (ECG) plays an imperative role in the medical field, as it records heart signal over time and is used to discover numerous cardiovascular diseases. If a documented ECG signal has a certain irregularity in its predefined features, this is called arrhythmia, the types of which include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular, etc. This has encouraged us to do research that consists of distinguishing between several arrhythmias by using deep neural network algorithms such as multi-layer perceptron (MLP) and convolution neural network (CNN). The TensorFlow library that was established by Google for deep learning and machine learning is used in python to acquire the algorithms proposed here. The ECG databases accessible at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed algorithm consists of four hidden layers with weights, biases in MLP, and four-layer convolution neural networks which map ECG samples to the different classes of arrhythmia. The accuracy of the algorithm surpasses the performance of the current algorithms that have been developed by other cardiologists in both sensitivity and precision.

  20. A one-layer recurrent neural network for non-smooth convex optimization subject to linear inequality constraints

    International Nuclear Information System (INIS)

    Liu, Xiaolan; Zhou, Mi

    2016-01-01

    In this paper, a one-layer recurrent network is proposed for solving a non-smooth convex optimization subject to linear inequality constraints. Compared with the existing neural networks for optimization, the proposed neural network is capable of solving more general convex optimization with linear inequality constraints. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds.

  1. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network.

    Science.gov (United States)

    Adak, M Fatih; Yumusak, Nejat

    2016-02-27

    Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which were trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  2. A Fusion Face Recognition Approach Based on 7-Layer Deep Learning Neural Network

    Directory of Open Access Journals (Sweden)

    Jianzheng Liu

    2016-01-01

    Full Text Available This paper presents a method for recognizing human faces with facial expression. In the proposed approach, a motion history image (MHI is employed to get the features in an expressive face. The face can be seen as a kind of physiological characteristic of a human and the expressions are behavioral characteristics. We fused the 2D images of a face and MHIs which were generated from the same face’s image sequences with expression. Then the fusion features were used to feed a 7-layer deep learning neural network. The previous 6 layers of the whole network can be seen as an autoencoder network which can reduce the dimension of the fusion features. The last layer of the network can be seen as a softmax regression; we used it to get the identification decision. Experimental results demonstrated that our proposed method performs favorably against several state-of-the-art methods.
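
    The two-stage layout described above (an autoencoder that reduces the dimension of the fused face/MHI features, followed by a softmax layer for the identification decision) can be sketched roughly in Keras as below. The feature size, layer widths, identity count, and random placeholder data are assumptions, and the paper's exact 7-layer configuration is not reproduced.

    ```python
    import numpy as np
    import tensorflow as tf

    # Sketch of the "autoencoder + softmax" layout from the abstract: the first
    # layers compress the fused face/MHI features, the final layer is a softmax
    # classifier over identities. All sizes and the placeholder data are assumptions.
    n_samples, feat_dim, n_ids = 500, 1024, 20
    X = np.random.rand(n_samples, feat_dim).astype("float32")
    y = np.random.randint(0, n_ids, size=n_samples)

    # Stage 1: train an autoencoder to compress the fusion features.
    encoder = tf.keras.Sequential([
        tf.keras.Input(shape=(feat_dim,)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),    # bottleneck code
    ])
    decoder = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(feat_dim, activation="sigmoid"),
    ])
    autoencoder = tf.keras.Sequential([encoder, decoder])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X, X, epochs=2, batch_size=32, verbose=0)

    # Stage 2: softmax regression on the encoded features for identification.
    classifier = tf.keras.Sequential([encoder,
                                      tf.keras.layers.Dense(n_ids, activation="softmax")])
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
    classifier.fit(X, y, epochs=2, batch_size=32, verbose=0)
    ```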

  3. Automatic detection of photoresist residual layer in lithography using a neural classification approach

    KAUST Repository

    Gereige, Issam

    2012-09-01

    Photolithography is a fundamental process in the semiconductor industry and is considered the key element towards extreme nanoscale integration. In this technique, a polymer photosensitive mask with the desired patterns is created on the substrate to be etched. Roughly speaking, the areas to be etched are not covered with polymer. Thus, no residual layer should remain on these areas in order to ensure an optimal transfer of the patterns onto the substrate. In this paper, we propose a nondestructive method based on a classification approach, achieved by an artificial neural network, for automatic residual layer detection from an ellipsometric signature. Only the case of a regular defect, i.e. a homogeneous residual layer, is considered, and the limitations of the method are discussed. Then, an experimental result on a 400 nm period grating manufactured with nanoimprint lithography is analyzed with our method. © 2012 Elsevier B.V. All rights reserved.

  4. Design of a universal two-layered neural network derived from the PLI theory

    Science.gov (United States)

    Hu, Chia-Lun J.

    2004-05-01

    The if-and-only-if (IFF) condition that a set of M analog-to-digital vector-mapping relations can be learned by a one-layered feed-forward neural network (OLNN) is that all the input analog vectors dichotomized by the i-th output bit must be positively, linearly independent, or PLI. If they are not PLI, then the OLNN simply cannot learn, no matter what learning rule is employed, because the solution of the connection matrix does not exist mathematically. However, in this case, one can still design a parallel-cascaded, two-layered perceptron (PCTLP) to achieve this general mapping goal. The design principle of this "universal" neural network is derived from the major mathematical properties of the PLI theory - changing the output bits of the dependent relations existing among the dichotomized input vectors to make the PLD relations PLI. Then, with a vector concatenation technique, the required mapping can still be learned by this PCTLP system with very high efficiency. This paper will report in detail the mathematical derivation of the general design principle and the design procedures of the PCTLP neural network system. It will then be verified in general by a practical numerical example.

  5. Foreground removal from WMAP 5 yr temperature maps using an MLP neural network

    DEFF Research Database (Denmark)

    Nørgaard-Nielsen, Hans Ulrik

    2010-01-01

    CMB signal makes it essential to minimize the systematic errors in the CMB temperature determinations. Methods. The feasibility of using simple neural networks to extract the CMB signal from detailed simulated data has already been demonstrated. Here, simple neural networks are applied to the WMAP 5 yr temperature data without using any auxiliary data. Results. A simple multilayer perceptron neural network with two hidden layers provides temperature estimates over more than 75 per cent of the sky with random errors significantly below those previously extracted from these data. Also, the systematic errors, i.e. errors correlated with the Galactic foregrounds, are very small. Conclusions. With these results the neural network method is well prepared for dealing with the high-quality CMB data from the ESA Planck Surveyor satellite. © ESO, 2010.

  6. Predicting carbonate permeabilities from wireline logs using a back-propagation neural network

    International Nuclear Information System (INIS)

    Wiener, J.M.; Moll, R.F.; Rogers, J.A.

    1991-01-01

    This paper explores the applicability of using neural networks to aid in the determination of carbonate permeability from wireline logs. Resistivity, interval transit time, neutron porosity, and bulk density logs from Texaco's Stockyard Creek oil field were used as input to a specially designed neural network to predict core permeabilities in this carbonate reservoir. Also of interest was the comparison of the neural network's results to those of standard statistical techniques. The process of developing the neural network for this problem has shown that a good understanding of the data is required when creating the training set from which the network learns. This network was trained to learn core permeabilities from raw and transformed log data using a hyperbolic tangent transfer function and a sum of squares global error function. Also, it required two hidden layers to solve this particular problem

  7. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    Science.gov (United States)

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Assessing artificial neural network performance in estimating the layer properties of pavements

    Directory of Open Access Journals (Sweden)

    Gloria Inés Beltran

    2014-05-01

    Full Text Available A major concern in assessing the structural condition of existing flexible pavements is the estimation of the mechanical properties of constituent layers, which is useful for the design and decision-making process in road management systems. This parameter identification problem is truly complex due to the large number of variables involved in pavement behavior. To this end, non-conventional adaptive or approximate solutions via Artificial Neural Networks – ANNs – are considered to properly map pavement response field measurements. Previous investigations have demonstrated the exceptional ability of ANNs in layer moduli estimation from non-destructive deflection tests, but most of the reported cases were developed using synthetic deflection data or hypothetical pavement systems. This paper presents further attempts to back-calculate layer moduli via ANN modeling, using a database gathered from field tests performed on three- and four-layer pavement systems. Traditional layer structuring and pavements with a stabilized subbase were considered. A three-stage methodology is developed in this study to design and validate an “optimum” ANN-based model, i.e., the best architecture possible along with adequate learning rules. An assessment of the resulting ANN model demonstrates its forecasting capabilities and efficiency in solving a complex parameter identification problem concerning pavements.

  9. SpineCreator: a Graphical User Interface for the Creation of Layered Neural Models.

    Science.gov (United States)

    Cope, A J; Richmond, P; James, S S; Gurney, K; Allerton, D J

    2017-01-01

    There is a growing requirement in computational neuroscience for tools that permit collaborative model building, model sharing, combining existing models into a larger system (multi-scale model integration), and are able to simulate models using a variety of simulation engines and hardware platforms. Layered XML model specification formats solve many of these problems, however they are difficult to write and visualise without tools. Here we describe a new graphical software tool, SpineCreator, which facilitates the creation and visualisation of layered models of point spiking neurons or rate coded neurons without requiring the need for programming. We demonstrate the tool through the reproduction and visualisation of published models and show simulation results using code generation interfaced directly into SpineCreator. As a unique application for the graphical creation of neural networks, SpineCreator represents an important step forward for neuronal modelling.

  10. Estimation of effective connectivity using multi-layer perceptron artificial neural network.

    Science.gov (United States)

    Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman

    2018-02-01

    Studies on interactions between brain regions estimate effective connectivity, (usually) based on causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mapping and to learn from training examples without the need of detailed knowledge of the underlying system. At any time instant, the past samples of data are placed in the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, the measure of "Causality coefficient" is defined based on the network structure, the connecting weights and the parameters of the hidden layer activation function. Simulation analysis demonstrates that the method, called "CREANN" (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method shows robustness with respect to the noise level of the data. Furthermore, the estimations are not significantly influenced by the model order (considered time-lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can show changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate the causal relationship among brain signals.
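
    A much-simplified illustration of the prediction-based idea behind CREANN is a Granger-style comparison: an MLP predicts y(t) from past samples of y alone and then from past samples of both x and y, and a drop in prediction error when x is included suggests an influence from x to y. The sketch below uses scikit-learn and synthetic signals; the actual CREANN causality coefficient is computed from the trained network's weights and activation parameters, which is not reproduced here.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Granger-style, prediction-based sketch of directed influence x -> y:
    # compare an MLP's fit when predicting y(t) from past y only versus from
    # past y and past x. Synthetic signals and network settings are assumptions.
    rng = np.random.default_rng(8)
    n, lag = 2000, 3
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

    def lagged(series, lag):
        # columns: series[t-1], series[t-2], ..., series[t-lag] for t >= lag
        return np.column_stack([series[lag - k - 1:n - k - 1] for k in range(lag)])

    Y_target = y[lag:]
    feats_y = lagged(y, lag)
    feats_xy = np.hstack([lagged(y, lag), lagged(x, lag)])

    for name, F in [("y past only", feats_y), ("y and x past", feats_xy)]:
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(F, Y_target)
        err = np.mean((model.predict(F) - Y_target) ** 2)
        print(name, round(err, 4))
    ```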

  11. Design of Jetty Piles Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Yongjei Lee

    2014-01-01

    Full Text Available To overcome the complexity of the jetty pile design process, artificial neural networks (ANN) are adopted. To generate the training samples for the ANN, finite element (FE) analysis was performed 50 times for 50 different design cases. The trained ANN was verified with another FE analysis case and then used as a structural analyzer. A multilayer back-propagation neural network (MBPNN) with two hidden layers was used as the ANN. The MBPNN was framed with the lateral forces on the jetty structure and the type of piles as inputs and the stress ratio of the piles as the output. The results from the MBPNN agree well with those from FE analysis. Particularly for more complex models with hundreds of different design cases, the MBPNN could substitute for parametric studies with FE analysis, saving design time and cost.

  12. A design philosophy for multi-layer neural networks with applications to robot control

    Science.gov (United States)

    Vadiee, Nader; Jamshidi, MO

    1989-01-01

    A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of a pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictive behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitatory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of special-purpose neuro-computer hardware which can be used in spatio-temporal pattern recognition in such areas as air defense systems, e.g., target tracking and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.

  13. Identification of determinants for globalization of SMEs using multi-layer perceptron neural networks

    International Nuclear Information System (INIS)

    Draz, U.; Jahanzaib, M.; Asghar, G.

    2016-01-01

    The SME (Small and Medium Sized Enterprises) sector is facing problems relating to the implementation of international quality standards. These SMEs need to identify factors affecting business success abroad for intelligent allocation of resources to the process of internationalization. In this paper, an MLP NN (Multi-Layer Perceptron Neural Network) has been used for identifying the relative importance of key variables related to firm basics, manufacturing, quality inspection labs and level of education in determining the exporting status of Pakistani SMEs. A survey was conducted to score the pertinent variables in SMEs, and the responses were coded for the MLP NNs. It is found that whether the firm is registered with an OEM (Original Equipment Manufacturer) and the size of the firm are the most important variables in determining the exporting status of SMEs, followed by the other variables. The results aid policy makers in formulating strategies for internationalization. (author)

  14. Anomalous Signal Detection in ELF Band Electromagnetic Wave using Multi-layer Neural Network with Wavelet Decomposition

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi; Takumi, Ichi; Hata, Masayasu

    It is well known that electromagnetic waves radiated from the earth's crust are useful for predicting earthquakes. We analyze the electromagnetic waves received at the extremely low frequency band of 223 Hz. These observed signals contain the seismic radiation from the earth's crust, but also include several undesired signals. Our research focuses on the signal detection technique to identify an anomalous signal corresponding to the seismic radiation in the observed signal. Conventional anomalous signal detection methods lack wide applicability due to their assumptions, e.g. the digital data have to be observed at the same time or by the same sensor. In order to overcome the limitation related to the observed signal, we previously proposed anomalous signal detection based on a multi-layer neural network trained with digital data observed over the span of a day. In the neural network approach, training data do not need to be recorded at the same place or the same time. However, some large-amplitude noise is detected as an anomalous signal. This paper develops a multi-layer neural network to decrease the false detection of anomalous signals from the electromagnetic wave. The training data for the proposed network are the decomposed components of the signal observed over several days, since seismic radiation is often recorded over several days to a couple of weeks. Results show that the proposed neural network achieves accurate detection of the anomalous signals that indicate seismic activity.

  15. Neural Network Models for Free Radical Polymerization of Methyl Methacrylate

    International Nuclear Information System (INIS)

    Curteanu, S.; Leon, F.; Galea, D.

    2003-01-01

    In this paper, a neural network modeling of the batch bulk methyl methacrylate polymerization is performed. To obtain the conversion and the number and weight average molecular weights, three neural networks were built. Each was a multilayer perceptron with one or two hidden layers. The choice of network topology, i.e. the number of hidden layers and the number of neurons in these layers, was based on achieving a compromise between precision and complexity. Thus, it was intended to have an error as small as possible at the end of the back-propagation training phases, while using a network with reduced complexity. The performances of the networks were evaluated by comparing network predictions with training data, validation data (which were not used for training), and with the results of a mechanistic model. The accurate predictions of the neural networks for monomer conversion, number average molecular weight and weight average molecular weight prove that this modeling methodology gives a good representation and generalization of the batch bulk methyl methacrylate polymerization. (author)

  16. Layers and Multilayers of Self-Assembled Polymers: Tunable Engineered Extracellular Matrix Coatings for Neural Cell Growth.

    Science.gov (United States)

    Landry, Michael J; Rollet, Frédéric-Guillaume; Kennedy, Timothy E; Barrett, Christopher J

    2018-03-12

    Growing primary cells and tissue in long-term cultures, such as primary neural cell culture, presents many challenges. A critical component of any environment that supports neural cell growth in vivo is an appropriate 2-D surface or 3-D scaffold, typically in the form of a thin polymer layer that coats an underlying plastic or glass substrate and aims to mimic critical aspects of the extracellular matrix. A fundamental challenge to mimicking a hydrophilic, soft natural cell environment is that materials with these properties are typically fragile and are difficult to adhere to and stabilize on an underlying plastic or glass cell culture substrate. In this review, we highlight the current state of the art and overview recent developments of new artificial extracellular matrix (ECM) surfaces for in vitro neural cell culture. Notably, these materials aim to strike a balance between being hydrophilic and soft while also being thick, stable, robust, and bound well to the underlying surface to provide an effective surface to support long-term cell growth. We focus on improved surface and scaffold coating systems that can mimic the natural physicochemical properties that enhance neuronal survival and growth, applied as soft hydrophilic polymer coatings for both in vitro cell culture and for implantable neural probes and 3-D matrixes that aim to enhance stability and longevity to promote neural biocompatibility in vivo. With respect to future developments, we outline four emerging principles that serve to guide the development of polymer assemblies that function well as artificial ECMs: (a) design inspired by biological systems and (b) the employment of principles of aqueous soft bonding and self-assembly to achieve (c) a high-water-content gel-like coating that is stable over time in a biological environment and possesses (d) a low modulus to more closely mimic soft, compliant real biological tissue. We then highlight two emerging classes of thick material coatings that

  17. Handwritten Devanagari Character Recognition Using Layer-Wise Training of Deep Convolutional Neural Networks and Adaptive Gradient Methods

    Directory of Open Access Journals (Sweden)

    Mahesh Jangid

    2018-02-01

    Full Text Available Handwritten character recognition is currently getting the attention of researchers because of possible applications in assisting technology for blind and visually impaired users, human–robot interaction, automatic data entry for business documents, etc. In this work, we propose a technique to recognize handwritten Devanagari characters using deep convolutional neural networks (DCNN), which are one of the recent techniques adopted from the deep learning community. We experimented with the ISIDCHAR database provided by the Information Sharing Index (ISI), Kolkata, and the V2DMDCHAR database, using six different architectures of DCNN to evaluate the performance, and also investigated the use of six recently developed adaptive gradient methods. A layer-wise training technique for the DCNN has been employed, which helped to achieve the highest recognition accuracy and a faster convergence rate. The results of the layer-wise-trained DCNN are favorable in comparison with those achieved by a shallow technique of handcrafted features and a standard DCNN.

  18. Using Hybrid Algorithm to Improve Intrusion Detection in Multi Layer Feed Forward Neural Networks

    Science.gov (United States)

    Ray, Loye Lynn

    2014-01-01

    The need to detect malicious behavior on computer networks continues to be important for maintaining a safe and secure environment. The purpose of this study was to determine the relationship of multilayer feed forward neural network architecture to the ability to detect abnormal behavior in networks. This involved building, training, and…

  19. Precision requirements for single-layer feed-forward neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the effect of limited precision analog hardware for weight adaptation to be used in on-chip learning feedforward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an

  20. Object recognition using deep convolutional neural networks with complete transfer and partial frozen layers

    NARCIS (Netherlands)

    Kruithof, M.C.; Bouma, H.; Fischer, N.M.; Schutte, K.

    2016-01-01

    Object recognition is important to understand the content of video and allow flexible querying in a large number of cameras, especially for security applications. Recent benchmarks show that deep convolutional neural networks are excellent approaches for object recognition. This paper describes an
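    The record above is truncated, but its title names the key technique: transferring a complete pretrained convolutional network and freezing part of its layers before fine-tuning. A minimal PyTorch/torchvision sketch of that general idea follows; the resnet18 backbone, the layer4 split and the 10-class head are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of transfer learning with partially frozen layers (PyTorch/torchvision).
# resnet18 and the "freeze everything except the last stage" split are illustrative
# assumptions, not the backbone or split used in the cited paper.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all parameters first (complete transfer of the pretrained features) ...
for p in model.parameters():
    p.requires_grad = False

# ... then unfreeze the last convolutional stage and replace the classifier head.
for p in model.layer4.parameters():
    p.requires_grad = True
num_classes = 10                                           # task-specific placeholder
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new head trains from scratch

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```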

  1. Internal-state analysis in layered artificial neural network trained to categorize lung sounds

    NARCIS (Netherlands)

    Oud, M

    2002-01-01

    In regular use of artificial neural networks, only input and output states of the network are known to the user. Weight and bias values can be extracted but are difficult to interpret. We analyzed internal states of networks trained to map asthmatic lung sound spectra onto lung function parameters.

  2. Accurate estimation of CO2 adsorption on activated carbon with multi-layer feed-forward neural network (MLFNN algorithm

    Directory of Open Access Journals (Sweden)

    Alireza Rostami

    2018-03-01

    Full Text Available Global warming due to the greenhouse effect has been considered a serious problem around the world for many years. Among the gases that cause the greenhouse effect, carbon dioxide is of particular concern because of the quantities entering the surrounding atmosphere. CO2 capture and separation, especially by adsorption, is therefore one of the most interesting approaches because of the low equipment cost, ease of operation, simplicity of design, and low energy consumption. In this study, experimental results are presented for the adsorption equilibria of carbon dioxide on activated carbon. The adsorption equilibrium data for carbon dioxide were predicted with two commonly used isotherm models in order to compare them with the multi-layer feed-forward neural network (MLFNN) algorithm over a wide range of partial pressures. As a result, the ANN-based algorithm shows much better efficiency and accuracy than the Sips and Langmuir isotherms. In addition, the applicability of the Sips and Langmuir models is limited to isothermal conditions, whereas the ANN-based algorithm is not restricted to constant-temperature conditions. Consequently, the MLFNN algorithm proves to be a promising model for calculating CO2 adsorption density on activated carbon. Keywords: Global warming, CO2 adsorption, Activated carbon, Multi-layer feed-forward neural network algorithm, Statistical quality measures
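    To make the comparison in this record concrete, the sketch below fits a classical Langmuir isotherm and a small feed-forward network to synthetic adsorption-like data. All numerical values, the single-input setup and the use of scikit-learn's MLPRegressor are illustrative assumptions; the study itself works with measured CO2 equilibrium data and its own MLFNN.

```python
# Illustrative comparison of a Langmuir isotherm fit and an MLP regressor on
# synthetic CO2-adsorption-like data.  All numbers here are made up; the real
# study fits measured equilibrium data.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

def langmuir(p, q_max, b):
    """Langmuir isotherm: q = q_max * b * p / (1 + b * p)."""
    return q_max * b * p / (1.0 + b * p)

rng = np.random.default_rng(1)
pressure = np.linspace(0.05, 30.0, 120)                      # bar (illustrative)
uptake = langmuir(pressure, 8.0, 0.25) + 0.05 * rng.standard_normal(pressure.size)

# Classical isotherm fit
(q_max, b), _ = curve_fit(langmuir, pressure, uptake, p0=[5.0, 0.1])

# MLP fit (feed-forward network with one small hidden layer)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
mlp.fit(pressure.reshape(-1, 1), uptake)

rmse_lang = np.sqrt(np.mean((langmuir(pressure, q_max, b) - uptake) ** 2))
rmse_mlp = np.sqrt(np.mean((mlp.predict(pressure.reshape(-1, 1)) - uptake) ** 2))
print(f"RMSE Langmuir: {rmse_lang:.3f}  RMSE MLP: {rmse_mlp:.3f}")
```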

  3. Artificial neural network modeling of jatropha oil fueled diesel engine for emission predictions

    Directory of Open Access Journals (Sweden)

    Ganapathy Thirunavukkarasu

    2009-01-01

    Full Text Available This paper deals with artificial neural network modeling of a diesel engine fueled with jatropha oil to predict the unburned hydrocarbons, smoke, and NOx emissions. The experimental data from the literature have been used as the database for the proposed neural network model development. For training the networks, the injection timing, injector opening pressure, plunger diameter, and engine load are used as the input layer. The outputs are hydrocarbons, smoke, and NOx emissions. The feed-forward back-propagation learning algorithm with two hidden layers is used in the networks. For each output a different network is developed with the required topology. The artificial neural network models for hydrocarbons, smoke, and NOx emissions gave R2 values of 0.9976, 0.9976, and 0.9984 and mean percent errors smaller than 2.7603, 4.9524, and 3.1136, respectively, for the training data sets, while R2 values of 0.9904, 0.9904, and 0.9942 and mean percent errors smaller than 6.5557, 6.1072, and 4.4682, respectively, were obtained for the testing data sets. The best linear fit of regression to the artificial neural network models of hydrocarbons, smoke, and NOx emissions gave correlation coefficient values of 0.98, 0.995, and 0.997, respectively.

  4. Direct and inverse neural networks modelling applied to study the influence of the gas diffusion layer properties on PBI-based PEM fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J. [Chemical Engineering Department, University of Castilla-La Mancha, Campus Universitario s/n, 13004 Ciudad Real (Spain); Piuleac, Ciprian-George; Curteanu, Silvia [Faculty of Chemical Engineering and Environmental Protection, Department of Chemical Engineering, ' ' Gh. Asachi' ' Technical University Iasi Bd. D. Mangeron, No. 71A, 700050 IASI (Romania)

    2010-08-15

    This article shows the application of a very useful mathematical tool, artificial neural networks, to predict fuel cell results (the value of the tortuosity and the cell voltage, at a given current density, and therefore, the power) on the basis of several properties that define a Gas Diffusion Layer: Teflon content, air permeability, porosity, mean pore size, hydrophobicity level. Four neural network types (multilayer perceptron, generalized feedforward network, modular neural network, and Jordan-Elman neural network) have been applied, with a good fitting between the predicted and the experimental values in the polarization curves. A simple feedforward neural network with one hidden layer proved to be an accurate model with good generalization capability (error about 1% in the validation phase). A procedure based on inverse neural network modelling was able to determine, with small errors, the initial conditions leading to imposed values of the fuel cell characteristics. In addition, the use of this tool proves very attractive for predicting the cell performance and, more interestingly, the influence of the properties of the gas diffusion layer on the cell performance, allowing possible enhancements of this material by changing some of its properties. (author)

  5. Real-Time Transportation Mode Identification Using Artificial Neural Networks Enhanced with Mode Availability Layers: A Case Study in Dubai

    Directory of Open Access Journals (Sweden)

    Young-Ji Byon

    2017-09-01

    Full Text Available Traditionally, departments of transportation (DOTs) have dispatched probe vehicles with dedicated vehicles and drivers for monitoring traffic conditions. Emerging assisted GPS (AGPS) and accelerometer-equipped smartphones offer new sources of raw data that arise from voluntarily-traveling smartphone users, provided that their modes of transportation can correctly be identified. By introducing additional raster map layers that indicate the availability of each mode, it is possible to enhance the accuracy of mode detection results. Even in its simplest form, an artificial neural network (ANN) excels at pattern recognition with a relatively short processing timeframe once it is properly trained, which is suitable for real-time mode identification purposes. Dubai is one of the major cities in the Middle East and offers unique environments, such as a high density of extremely high-rise buildings that may introduce multi-path errors with GPS signals. This paper develops real-time mode identification ANNs enhanced with proposed mode availability geographic information system (GIS) layers, firstly for universal mode detection and, secondly, for auto mode detection for the particular intelligent transportation system (ITS) application of traffic monitoring, and compares the results with existing approaches. It is found that ANN-based real-time mode identification, enhanced by mode availability GIS layers, significantly outperforms the existing methods.

  6. Predictions of SEP events by means of a linear filter and layer-recurrent neural network

    Czech Academy of Sciences Publication Activity Database

    Valach, F.; Revallo, M.; Hejda, Pavel; Bochníček, Josef

    2011-01-01

    Roč. 69, č. 9-10 (2011), s. 758-766 ISSN 0094-5765 R&D Projects: GA AV ČR(CZ) IAA300120608; GA MŠk OC09070 Grant - others:VEGA(SK) 2/0015/11; VEGA(SK) 2/0022/11 Institutional research plan: CEZ:AV0Z30120515 Keywords : coronal mass ejection * X-ray flare * solar energetic particles * artificial neural network Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.614, year: 2011

  7. Exploiting Hidden Layer Responses of Deep Neural Networks for Language Recognition

    Science.gov (United States)

    2016-09-08

    Target languages include Arabic (ara: Egyptian, Iraqi, Levantine, Maghrebi, Modern Standard), Chinese (chi: Cantonese, Mandarin, Min, Wu), and English (eng: British, ...). [Figure 1: frame-by-frame DNN language identification.] Figure 1 shows the architecture of the DNN. To compare the direct DNN system with the proposed DNN I-vector system, we trained a single neural network to classify all 20 languages. The architecture of this

  8. Crop Classification by Forward Neural Network with Adaptive Chaotic Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2011-05-01

    Full Text Available This paper proposes a hybrid crop classifier for polarimetric synthetic aperture radar (SAR) images. The feature sets consisted of the span image, the H/A/α decomposition, and the gray-level co-occurrence matrix (GLCM) based texture features. The features were then reduced by principal component analysis (PCA). Finally, a two-hidden-layer forward neural network (NN) was constructed and trained by adaptive chaotic particle swarm optimization (ACPSO). K-fold cross validation was employed to enhance generalization. The experimental results on Flevoland sites demonstrate the superiority of ACPSO to back-propagation (BP), adaptive BP (ABP), momentum BP (MBP), Particle Swarm Optimization (PSO), and Resilient back-propagation (RPROP) methods. Moreover, the computation time for each pixel is only 1.08 × 10−7 s.
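    A rough sketch of the feature pipeline described above is given below: PCA reduction followed by a two-hidden-layer feed-forward classifier. The random placeholder data, the layer sizes and the use of plain gradient-based training in place of the paper's adaptive chaotic PSO are all assumptions made for illustration.

```python
# Sketch of the feature pipeline: PCA reduction followed by a two-hidden-layer
# feed-forward classifier.  Gradient-based training stands in here for the paper's
# adaptive chaotic PSO, and the data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 24))        # e.g. span + H/A/alpha + GLCM texture features
y = rng.integers(0, 4, size=500)          # four crop classes (placeholder labels)

clf = make_pipeline(
    PCA(n_components=8),                                   # reduce correlated features
    MLPClassifier(hidden_layer_sizes=(20, 10),             # two hidden layers
                  max_iter=1000, random_state=0),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```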

  9. Predicting Subsurface Soil Layering and Landslide Risk with Artificial Neural Networks

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Barari, Amin; Ibsen, Lars Bo

    2011-01-01

    This paper is concerned principally with the application of ANN model in geotechnical engineering. In particular the application for subsurface soil layering and landslide analysis is discussed in more detail. Three ANN models are trained using the required geotechnical data obtained from...... networks are capable of predicting variations in the soil profile and assessing the landslide hazard with an acceptable level of confidence....

  10. Super-resolution using a light inception layer in convolutional neural network

    Science.gov (United States)

    Mou, Qinyang; Guo, Jun

    2018-04-01

    Recently, several models based on CNN architectures have achieved great results on the Single Image Super-Resolution (SISR) problem. In this paper, we propose an image super-resolution (SR) method using a light inception layer in a convolutional network (LICN). Due to the strong representation ability of our well-designed inception layer, which can learn richer representations with fewer parameters, we can build our model with a shallow architecture that reduces the effect of the vanishing gradient problem and saves computational cost. Our model strikes a balance between computational speed and the quality of the result. Compared with state-of-the-art methods, we produce comparable or better results with faster computational speed.
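    The "light inception layer" itself is not specified in this record, so the sketch below only illustrates the general inception idea it builds on: parallel convolution branches with cheap 1x1 bottlenecks whose outputs are concatenated. Layer widths and the block structure are assumptions, not the paper's LICN design.

```python
# Generic "light inception" style block: parallel 1x1 and 3x3 convolution branches
# whose outputs are concatenated.  This is an illustrative PyTorch sketch, not the
# exact layer proposed in the cited paper.
import torch
import torch.nn as nn

class LightInceptionBlock(nn.Module):
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branch1 = nn.Sequential(                 # cheap 1x1 branch
            nn.Conv2d(in_ch, branch_ch, kernel_size=1), nn.ReLU(inplace=True)
        )
        self.branch3 = nn.Sequential(                 # 1x1 bottleneck then 3x3
            nn.Conv2d(in_ch, branch_ch, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x)], dim=1)

x = torch.randn(1, 64, 32, 32)
print(LightInceptionBlock(64, 16)(x).shape)   # -> torch.Size([1, 32, 32, 32])
```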

  11. Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets

    OpenAIRE

    Romanuke Vadim V.

    2017-01-01

    A technique of DropOut for preventing overfitting of convolutional neural networks for image classification is considered in the paper. The goal is to find a rule of rationally allocating DropOut layers of 0.5 rate to maximise performance. To achieve the goal, two common network architectures are used having either 4 or 5 convolutional layers. Benchmarking is fulfilled with CIFAR-10, EEACL26, and NORB datasets. Initially, series of all admissible versions for allocation of DropOut layers are ...

  12. Foreground removal from WMAP 5 yr temperature maps using an MLP neural network

    Science.gov (United States)

    Nørgaard-Nielsen, H. U.

    2010-09-01

    Aims: One of the main obstacles for extracting the cosmic microwave background (CMB) signal from observations in the mm/sub-mm range is the foreground contamination by emission from Galactic components: mainly synchrotron, free-free, and thermal dust emission. The statistical nature of the intrinsic CMB signal makes it essential to minimize the systematic errors in the CMB temperature determinations. Methods: The feasibility of using simple neural networks to extract the CMB signal from detailed simulated data has already been demonstrated. Here, simple neural networks are applied to the WMAP 5 yr temperature data without using any auxiliary data. Results: A simple multilayer perceptron neural network with two hidden layers provides temperature estimates over more than 75 per cent of the sky with random errors significantly below those previously extracted from these data. Also, the systematic errors, i.e. errors correlated with the Galactic foregrounds, are very small. Conclusions: With these results the neural network method is well prepared for dealing with the high-quality CMB data from the ESA Planck Surveyor satellite.

  13. Forecast of TEXT plasma disruptions using soft X rays as input signal in a neural network

    International Nuclear Information System (INIS)

    Vannucci, A.; Oliveira, K.A.; Tajima, T.

    1999-01-01

    A feedforward neural network with two hidden layers is used to forecast major and minor disruptive instabilities in TEXT tokamak discharges. Using the experimental data of soft X ray signals as input data, the neural network is trained with one disruptive plasma discharge, and a different disruptive discharge is used for validation. After being properly trained, the networks, with the same set of weights, are used to forecast disruptions in two other plasma discharges. It is observed that the neural network is able to predict the occurrence of a disruption more than 3 ms in advance. This time interval is almost 3 times longer than the one obtained previously when a magnetic signal from a Mirnov coil was used to feed the neural networks. Visually, no indication of an upcoming disruption is seen in the experimental data this far back from the time of disruption. Finally, by observing the predictive behaviour of the network for the disruptive discharges analysed and comparing the soft X ray data with the corresponding magnetic experimental signal, conjectures are made about where inside the plasma column the disruption first started. (author)

  14. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network

    Directory of Open Access Journals (Sweden)

    M. Fatih Adak

    2016-02-01

    Full Text Available Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data sets (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using the artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which was trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  15. Storage capacity of multi-layered neural networks with binary weights

    International Nuclear Information System (INIS)

    Tarkowski, W.; Hemmen, J.L. van

    1997-01-01

    Using statistical physics methods we investigate two-layered perceptrons which consist of N binary input neurons, K hidden units and a single output node. Four basic types of such networks are considered: the so-called Committee, Parity, and AND Machines, which make a decision based on the majority, parity, and logical AND rules, respectively (for these cases the weights that connect hidden units and output node are taken to be equal to one), and the General Machine, where one allows all the synaptic couplings to vary. For these kinds of network we examine two types of architecture: fully connected and tree-connected ones (with overlapping and non-overlapping receptive fields, respectively). All the above mentioned machines have binary weights. Our basic interest is focused on the storage capabilities of such networks which realize p = αN random, unbiased dichotomies (α denotes the so-called storage ratio). The analysis is done using the annealed approximation and is valid for all values of K. The critical (maximal) storage capacity of the fully connected Committee Machine reads α c = K, while in the case of the tree structure one gets α c = 1, independent of K. The results obtained for the Parity Machine are exactly the same as those for the Committee network. The optimal storage of the AND Machine depends on the distribution of the outputs for the patterns. These associations are studied in detail. We have found also that the capacity of the General Machines remains the same as compared to systems with fixed weights between the intermediate layer and the output node. Some of the findings (especially those concerning the storage capacity of the Parity Machine) are in good agreement with known numerical results. (author)

  16. SU-F-E-09: Respiratory Signal Prediction Based On Multi-Layer Perceptron Neural Network Using Adjustable Training Samples

    Energy Technology Data Exchange (ETDEWEB)

    Sun, W; Jiang, M; Yin, F [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motions prior to delivery. The displacement of a moving organ may change considerably due to large variations in respiration at different periods. This study aims to reduce the influence of those changes using adjustable training samples and multi-layer perceptron neural networks (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer the respiration position alternately, and the training samples are updated with time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate the respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120∼150 s respiration position using the 0∼120 s training signals. At the same time, MLP 2 was trained using the 30∼150 s training signals. MLP 2 was then used to predict the 150∼180 s respiration position from the 30∼150 s training signals. The respiration position was predicted in this way until the signal ended. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1 s ahead of the response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). In addition, a 30% improvement in mean absolute error was achieved between MLP (0.1798 on average) and ASMLP (0.1267 on average). For predicting 2 s ahead of the response time, the correlation coefficient was improved from 0.61415 to 0.7098. The mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average). Conclusion: The preliminary results
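    The two building blocks named in this record, Savitzky-Golay smoothing and an MLP retrained on a sliding window of recent samples, can be sketched as follows. The single network, window lengths, lags and the synthetic sinusoidal "breathing" trace are all simplifying assumptions; the actual ASMLP alternates between two networks on RPM data.

```python
# Sketch of the two ingredients described above: Savitzky-Golay smoothing and an
# MLP retrained on a rolling window of recent respiratory samples.  Window sizes,
# lags and the single-network simplification are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.neural_network import MLPRegressor

def make_windows(sig, lags):
    X = np.array([sig[i:i + lags] for i in range(len(sig) - lags)])
    y = sig[lags:]
    return X, y

rng = np.random.default_rng(0)
t = np.arange(0, 300, 0.1)                               # ~300 s sampled at 10 Hz
raw = np.sin(2 * np.pi * t / 4.0) + 0.1 * rng.standard_normal(t.size)
sig = savgol_filter(raw, window_length=31, polyorder=3)  # smooth before training

lags, train_len, block = 30, 1200, 300                   # in samples, not seconds
pred = []
for start in range(train_len, len(sig) - block, block):
    Xtr, ytr = make_windows(sig[start - train_len:start], lags)   # rolling training set
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(Xtr, ytr)
    Xte, _ = make_windows(sig[start - lags:start + block], lags)  # next block, one step ahead
    pred.extend(net.predict(Xte))
print("predicted", len(pred), "samples")
```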

  17. Entropy-Based Application Layer DDoS Attack Detection Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Khundrakpam Johnson Singh

    2016-10-01

    Full Text Available Distributed denial-of-service (DDoS) attack is one of the major threats to the web server. The rapid increase of DDoS attacks on the Internet has clearly pointed out the limitations of current intrusion detection systems or intrusion prevention systems (IDS/IPS), mostly caused by application-layer DDoS attacks. Within this context, the objective of the paper is to detect a DDoS attack using a multilayer perceptron (MLP) classification algorithm with a genetic algorithm (GA) as the learning algorithm. In this work, we analyzed the standard EPA-HTTP (environmental protection agency-hypertext transfer protocol) dataset and selected the parameters that will be used as input to the classifier model for differentiating the attack from a normal profile. The parameters selected are the HTTP GET request count, entropy, and variance for every connection. The proposed model can provide a better accuracy of 98.31%, sensitivity of 0.9962, and specificity of 0.0561 when compared to other traditional classification models.
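    As a rough illustration of the per-connection features named above (GET request count, entropy, variance) feeding an MLP classifier, consider the sketch below. The feature definitions, the toy connections and the use of ordinary gradient training instead of the paper's GA-based learning are all assumptions made for illustration.

```python
# Hedged sketch of per-connection features (GET count, entropy, variance) feeding
# an MLP classifier.  The feature definitions and the plain gradient training used
# here are illustrative; the cited paper trains the MLP with a genetic algorithm.
import numpy as np
from collections import Counter
from sklearn.neural_network import MLPClassifier

def connection_features(requested_pages, inter_arrival_times):
    counts = np.array(list(Counter(requested_pages).values()), dtype=float)
    probs = counts / counts.sum()
    entropy = -(probs * np.log2(probs)).sum()          # diversity of requested pages
    return [len(requested_pages), entropy, np.var(inter_arrival_times)]

# Two toy connections: a "normal" browsing session and a repetitive flood.
normal = connection_features(["/", "/news", "/about", "/news"], [1.2, 0.8, 2.5, 1.1])
flood = connection_features(["/index"] * 200, [0.01] * 200)

X = np.array([normal, flood] * 50)                    # placeholder training set
y = np.array([0, 1] * 50)                             # 0 = normal, 1 = attack
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0).fit(X, y)
print(clf.predict([flood]))                           # -> [1]
```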

  18. Daily global solar radiation modelling using multi-layer perceptron neural networks in semi-arid region

    Directory of Open Access Journals (Sweden)

    Mawloud GUERMOUI

    2016-07-01

    Full Text Available Accurate estimation of Daily Global Solar Radiation (DGSR) has been a major goal for solar energy applications. However, solar radiation measurements are not a simple task, for several reasons. In cases where data are not available, it is very common to use computational models to estimate the missing data, based mainly on the search for relationships between weather variables, such as temperature, humidity, sunshine duration, etc. In this respect, the present study focuses on the development of an artificial neural network (ANN) model for the estimation of daily global solar radiation on a horizontal surface in Ghardaia city (South Algeria). In this analysis the back-propagation algorithm is applied. Daily mean air temperature, relative humidity and sunshine duration were used as climatic input parameters, while the daily global solar radiation (DGSR) was the only output of the ANN. We evaluated Multi-Layer Perceptron (MLP) models to estimate DGSR using three years of measurements (2005-2008). It was found that the MLP model based on sunshine duration and mean air temperature gives accurate results in terms of Mean Absolute Bias Error, Root Mean Square Error, Relative Square Error and Correlation Coefficient. The obtained values of these indicators are 0.67 MJ/m², 1.28 MJ/m², 6.12% and 98.18%, respectively, which shows that the MLP is highly qualified for DGSR estimation in semi-arid climates.

  19. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    Directory of Open Access Journals (Sweden)

    Trong-Ngoc Le

    2016-01-01

    Full Text Available Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.

  20. Classification of Alzheimer's Disease Based on Eight-Layer Convolutional Neural Network with Leaky Rectified Linear Unit and Max Pooling.

    Science.gov (United States)

    Wang, Shui-Hua; Phillips, Preetha; Sui, Yuxiu; Liu, Bin; Yang, Ming; Cheng, Hong

    2018-03-26

    Alzheimer's disease (AD) is a progressive brain disease. The goal of this study is to provide a new computer-vision based technique to detect it in an efficient way. The brain-imaging data of 98 AD patients and 98 healthy controls was collected using a data augmentation method. Then, a convolutional neural network (CNN) was used, as CNN is the most successful tool in deep learning. An 8-layer CNN was created with an optimal structure obtained by experience. Three activation functions (AFs) were tested: sigmoid, rectified linear unit (ReLU), and leaky ReLU. Three pooling functions were also tested: average pooling, max pooling, and stochastic pooling. The numerical experiments demonstrated that leaky ReLU and max pooling gave the best performance, achieving a sensitivity of 97.96%, a specificity of 97.35%, and an accuracy of 97.65%. In addition, the proposed approach was compared with eight state-of-the-art approaches. The method increased the classification accuracy by approximately 5% compared to state-of-the-art methods.
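    An illustrative PyTorch sketch of a small convolutional network using leaky ReLU activations and max pooling, in the spirit of the architecture described above, is given below. The layer counts, filter sizes and the assumed single-channel 128x128 input are placeholders rather than the paper's exact 8-layer design.

```python
# Illustrative PyTorch CNN using leaky ReLU and max pooling, in the spirit of the
# architecture described above.  Filter counts, kernel sizes and the input size
# (1-channel 128x128 slices) are assumptions, not the paper's exact 8-layer design.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.LeakyReLU(0.01), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.LeakyReLU(0.01), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.01), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 128), nn.LeakyReLU(0.01),
    nn.Linear(128, 2),                                # AD vs healthy control
)
print(model(torch.randn(4, 1, 128, 128)).shape)       # -> torch.Size([4, 2])
```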

  1. Power level control of the TRIGA Mark-II research reactor using the multifeedback layer neural network and the particle swarm optimization

    International Nuclear Information System (INIS)

    Coban, Ramazan

    2014-01-01

    Highlights: • A multifeedback-layer neural network controller is presented for a research reactor. • Off-line learning of the MFLNN is accomplished by the PSO algorithm. • The results revealed that the MFLNN–PSO controller has a remarkable performance. - Abstract: In this paper, an artificial neural network controller is presented using the Multifeedback-Layer Neural Network (MFLNN), which is a recently proposed recurrent neural network, for neutronic power level control of a nuclear research reactor. Off-line learning of the MFLNN is accomplished by the Particle Swarm Optimization (PSO) algorithm. The MFLNN-PSO controller design is based on a nonlinear model of the TRIGA Mark-II research reactor. The learning and the test processes are implemented by means of a computer program at different power levels. The simulation results obtained reveal that the MFLNN-PSO controller has a remarkable performance on the neutronic power level control of the reactor for tracking the step reference power trajectories
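    The record above combines off-line PSO learning with a (recurrent) multifeedback-layer network. The numpy sketch below shows only the PSO-trains-the-weights idea on a much simpler feed-forward model with one hidden layer; the toy regression target, swarm settings and network size are all illustrative assumptions.

```python
# Minimal example of optimizing a tiny feed-forward network's weights with plain
# particle swarm optimization, illustrating off-line PSO learning.  The paper's
# network is a recurrent MFLNN; this numpy sketch uses a much simpler model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])                                  # toy target to learn

def forward(w, x):
    w1, b1, w2, b2 = w[:8].reshape(1, 8), w[8:16], w[16:24], w[24]
    h = np.tanh(x @ w1 + b1)                             # one hidden layer of 8 units
    return h @ w2 + b2

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

n_particles, dim = 30, 25
pos = rng.standard_normal((n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("final MSE:", loss(gbest))
```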

  2. Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets

    Directory of Open Access Journals (Sweden)

    Romanuke Vadim V.

    2017-12-01

    Full Text Available A technique of DropOut for preventing overfitting of convolutional neural networks for image classification is considered in the paper. The goal is to find a rule for rationally allocating DropOut layers of 0.5 rate to maximise performance. To achieve the goal, two common network architectures are used, having either 4 or 5 convolutional layers. Benchmarking is fulfilled with the CIFAR-10, EEACL26, and NORB datasets. Initially, series of all admissible versions for allocation of DropOut layers are generated. After the performance against the series is evaluated, normalized and averaged, the compromising rule is found. It consists in non-compactly inserting a few DropOut layers before the last convolutional layer. It is likely that the scheme with two or more DropOut layers fits networks of many convolutional layers for image classification problems with plenty of features. Such a scheme should also fit simple datasets prone to overfitting. In fact, the rule “prefers” a fewer number of DropOut layers. The exemplary gain of the rule application is roughly between 10 % and 50 %.
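    A sketch of the allocation rule summarised above, a DropOut layer of 0.5 rate placed just before the last convolutional layer of a generic four-convolution network, is shown below in PyTorch. The layer widths, pooling choices and CIFAR-10-sized input are placeholders, not the benchmark architectures used in the paper.

```python
# Sketch of the allocation rule described above: a single DropOut(0.5) layer placed
# just before the last convolutional layer of a generic 4-convolution network.
# The layer widths and input size are placeholders, not the benchmark architectures.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, pool=True):
    layers = [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return layers

model = nn.Sequential(
    *conv_block(3, 32),
    *conv_block(32, 64),
    *conv_block(64, 128),
    nn.Dropout(0.5),               # DropOut inserted before the last conv layer
    *conv_block(128, 128, pool=False),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 10),            # e.g. CIFAR-10 classes
)
print(model(torch.randn(2, 3, 32, 32)).shape)   # -> torch.Size([2, 10])
```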

  3. Forecast of TEXT plasma disruptions using soft X-rays as input signal in a neural network

    International Nuclear Information System (INIS)

    Vannucci, A.; Oliveira, K.A.; Tajima, T.

    1998-02-01

    A feed-forward neural network with two hidden layers is used in this work to forecast major and minor disruptive instabilities in TEXT discharges. Using soft X-ray signals as input data, the neural net is trained with one disruptive plasma pulse, and a different disruptive discharge is used for validation. After being properly trained, the networks, with the same set of weights, are then used to forecast disruptions in two other plasma pulses. It is observed that the neural net is able to predict the onset of a disruption more than 3 ms in advance. This time interval is almost three times longer than the one obtained previously when the magnetic signal from a Mirnov coil was used to feed the neural networks. To our own eyes there is no indication of an upcoming disruption in the experimental data this far back from the time of disruption. Finally, from what we observe in the predictive behavior of our network, speculations are made as to whether the disruption triggering mechanism would be associated with an increase of the m = 2 magnetic island, which disturbs the central part of the plasma column afterwards, or whether, in view of the results from this work, the initial perturbation would have occurred first in the central part of the plasma column, within the q = 1 magnetic surface, with the m = 2 MHD mode destabilized afterwards

  4. Classification of Atrial Septal Defect and Ventricular Septal Defect with Documented Hemodynamic Parameters via Cardiac Catheterization by Genetic Algorithms and Multi-Layered Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Mustafa Yıldız

    2012-08-01

    Full Text Available Introduction: We aimed to develop a classification method to discriminate ventricular septal defect and atrial septal defect by using several hemodynamic parameters. Patients and Methods: Forty-three patients (30 atrial septal defect, 13 ventricular septal defect; 26 female, 17 male) with documented hemodynamic parameters via cardiac catheterization are included in the study. Parameters such as blood pressure values of different areas, gender, age and Qp/Qs ratios are used for classification. The parameters used in classification were determined by the divergence analysis method. Those parameters are: (i) pulmonary artery diastolic pressure, (ii) Qp/Qs ratio, (iii) right atrium pressure, (iv) age, (v) pulmonary artery systolic pressure, (vi) left ventricular systolic pressure, (vii) aorta mean pressure, (viii) left ventricular diastolic pressure, (ix) aorta diastolic pressure, (x) aorta systolic pressure. These parameters, obtained from our study population, were fed into a multi-layered artificial neural network, and the network was trained by a genetic algorithm. Results: The training cluster consists of 14 cases (7 atrial septal defect and 7 ventricular septal defect). The overall success ratio is 79.2%, and with proper training of the artificial neural network this ratio increases up to 89%. Conclusion: Parameters of the artificial neural network, which need to be determined by the investigator in classical methods, can easily be determined with the help of genetic algorithms. During the training of the artificial neural network by genetic algorithms, both the topology of the network and the factors of the network can be determined. During the test stage, elements not included in the training cluster are assigned to the test cluster, and as a result of this study, we observed that a multi-layered artificial neural network can be trained properly, and that a neural network is a successful method for the intended classification.

  5. Estimating wheat and maize daily evapotranspiration using artificial neural network

    Science.gov (United States)

    Abrishami, Nazanin; Sepaskhah, Ali Reza; Shahrokhnia, Mohammad Hossein

    2018-02-01

    In this research, an artificial neural network (ANN) is used for estimating wheat and maize daily standard evapotranspiration. Ten ANN models with different structures were designed for each crop. Daily climatic data [maximum temperature (Tmax), minimum temperature (Tmin), average temperature (Tave), maximum relative humidity (RHmax), minimum relative humidity (RHmin), average relative humidity (RHave), wind speed (U2), sunshine hours (n), net radiation (Rn)], leaf area index (LAI), and plant height (h) were used as inputs. For five of the ten structures, the evapotranspiration (ETc) values calculated by the equation ETc = ET0 × Kc (ET0 from the Penman-Monteith equation and Kc from FAO-56; ANNC) were used as outputs, and for the other five structures, the ETc values measured by a weighing lysimeter (ANNM) were used as outputs. In all structures, a feed-forward multiple-layer network with one or two hidden layers, a sigmoid transfer function and the BR or LM training algorithm was used. The favorite network was selected based on various statistical criteria. The results showed the suitable capability and acceptable accuracy of ANNs, particularly those having two hidden layers in their structure, in estimating the daily evapotranspiration. The best model for estimation of maize daily evapotranspiration is «M»ANN1C (8-4-2-1), with Tmax, Tmin, RHmax, RHmin, U2, n, LAI, and h as input data and the LM training rule; its statistical parameters (NRMSE, d, and R2) are 0.178, 0.980, and 0.982, respectively. The best model for estimation of wheat daily evapotranspiration is «W»ANN5C (5-2-3-1), with Tmax, Tmin, Rn, LAI, and h as input data and the LM training rule; its statistical parameters (NRMSE, d, and R2) are 0.108, 0.987, and 0.981, respectively. In addition, when the calculated ETc was used as the network output for both wheat and maize, more accurate estimates were obtained. Therefore, ANN is a suitable method for estimating the evapotranspiration of wheat and maize.

  6. Single Layer Recurrent Neural Network for detection of swarm-like earthquakes in W-Bohemia/Vogtland - the method

    Czech Academy of Sciences Publication Activity Database

    Doubravová, Jana; Wiszniowski, J.; Horálek, Josef

    2016-01-01

    Roč. 93, August (2016), s. 138-149 ISSN 0098-3004 R&D Projects: GA ČR GAP210/12/2336; GA MŠk LM2010008 Institutional support: RVO:67985530 Keywords : event detection * artificial neural network * West Bohemia/Vogtland Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 2.533, year: 2016

  7. Lifetime assessment of atomic-layer-deposited Al2O3-Parylene C bilayer coating for neural interfaces using accelerated age testing and electrochemical characterization.

    Science.gov (United States)

    Minnikanti, Saugandhika; Diao, Guoqing; Pancrazio, Joseph J; Xie, Xianzong; Rieth, Loren; Solzbacher, Florian; Peixoto, Nathalia

    2014-02-01

    The lifetime and stability of insulation are critical features for the reliable operation of an implantable neural interface device. A critical factor for an implanted insulation's performance is its barrier properties that limit access of biological fluids to the underlying device or metal electrode. Parylene C is a material that has been used in FDA-approved implantable devices. Considered a biocompatible polymer with barrier properties, it has been used as a substrate, insulation or an encapsulation for neural implant technology. Recently, it has been suggested that a bilayer coating of Parylene C on top of atomic-layer-deposited Al2O3 would provide enhanced barrier properties. Here we report a comprehensive study to examine the mean time to failure of Parylene C and Al2O3-Parylene C coated devices using accelerated lifetime testing. Samples were tested at 60°C for up to 3 months while performing electrochemical measurements to characterize the integrity of the insulation. The mean time to failure for Al2O3-Parylene C was 4.6 times longer than Parylene C coated samples. In addition, based on modeling of the data using electrical circuit equivalents, we show here that there are two main modes of failure. Our results suggest that failure of the insulating layer is due to pore formation or blistering as well as thinning of the coating over time. The enhanced barrier properties of the bilayer Al2O3-Parylene C over Parylene C makes it a promising candidate as an encapsulating neural interface. Copyright © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  8. High serotonin levels during brain development alter the structural input-output connectivity of neural networks in the rat somatosensory layer IV

    Directory of Open Access Journals (Sweden)

    Stéphanie Miceli

    2013-06-01

    Full Text Available Homeostatic regulation of serotonin (5-HT) concentration is critical for normal topographical organization and development of thalamocortical (TC) afferent circuits. Down-regulation of the serotonin transporter (SERT) and the consequent impaired reuptake of 5-HT at the synapse results in a reduced terminal branching of developing TC afferents within the primary somatosensory cortex (S1). Despite the presence of multiple genetic models, the effect of high extracellular 5-HT levels on the structure and function of developing intracortical neural networks is far from being understood. Here, using juvenile SERT knockout (SERT-/-) rats we investigated, in vitro, the effect of increased 5-HT levels on the structural organization of (i) the thalamocortical projections of the ventroposteromedial thalamic nucleus towards S1, (ii) the general barrel-field pattern and (iii) the electrophysiological and morphological properties of the excitatory cell population in layer IV of S1 (spiny stellate and pyramidal cells). Our results confirmed previous findings that high levels of 5-HT during development lead to a reduction of the topographical precision of TCA projections towards the barrel cortex. Also, the barrel pattern was altered but not abolished in SERT-/- rats. In layer IV, both excitatory spiny stellate and pyramidal cells showed a significantly reduced intracolumnar organization of their axonal projections. In addition, the layer IV spiny stellate cells gave rise to a prominent projection towards the infragranular layer Vb. Our findings point to a structural and functional reorganization of TCAs, as well as early stage intracortical microcircuitry, following the disruption of 5-HT reuptake during critical developmental periods. The increased projection pattern of the layer IV neurons suggests that the intracortical network changes are not limited to the main entry layer IV but may also affect the subsequent stages of the canonical circuits of the barrel

  9. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms.

    Science.gov (United States)

    Ferentinos, Konstantinos P

    2005-09-01

    Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on the evolutionary process of genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect or 'weak specification' representation was used for the encoding of NN topologies and training parameters into genes of the genetic algorithm (GA). Some a priori knowledge of the demands on network topology for specific application cases is required by this approach, so that the infinite search space of the problem is limited to some reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA also encoded the type of activation functions in both hidden and output nodes of the NN and the type of minimization algorithm that was used by the backpropagation algorithm for the training of the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach that is usually used for these tasks.

  10. Multi-Layer Artificial Neural Networks Based MPPT-Pitch Angle Control of a Tidal Stream Generator

    Directory of Open Access Journals (Sweden)

    Khaoula Ghefiri

    2018-04-01

    Full Text Available Artificial intelligence technologies are widely investigated as a promising technique for tackling complex and ill-defined problems. In this context, the artificial neural network methodology has been considered an effective tool to handle renewable energy systems. Thereby, the use of Tidal Stream Generator (TSG) systems aims to provide clean and reliable electrical power. However, the power captured from tidal currents is highly disturbed due to the swell effect and the periodicity of the tidal current phenomenon. In order to improve the quality of the generated power, this paper focuses on power smoothing control. For this purpose, a novel Artificial Neural Network (ANN) is investigated and implemented to provide the proper rotational speed reference and the blade pitch angle. The ANN supervisor adequately switches the system between variable speed and power limitation modes. In order to recover the maximum power from the tides, a rotational speed control is applied to the rotor side converter following the Maximum Power Point Tracking (MPPT) generated from the ANN block. In the case of strong tidal currents, a pitch angle control is set based on the ANN approach to keep the system operating within safe limits. Two case studies were performed to test the performance of the output power. Simulation results demonstrate that the implemented control strategies achieve a smoothed generated power in the case of swell disturbances.

  11. Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network.

    Science.gov (United States)

    Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann

    2009-06-01

    Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly's halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As simulations with neurobiologically realistic neural networks demonstrate here spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.
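    As a toy illustration of the kind of plasticity rule discussed above, the sketch below repeatedly pairs sparse "perception" and "action" patterns and applies fixed LTP/LTD updates: connections between co-active units are strengthened while connections from active to silent units are weakened, so that a small set of saturated "kernel" synapses emerges and the rest are pruned. The pattern statistics and update amounts are arbitrary choices, and the rule is a strong simplification of the model in the paper.

```python
# Toy Hebbian rule with fixed LTP/LTD amounts: connections between co-active units
# are potentiated, connections from active units to inactive ones are depressed.
# A strong simplification of the plasticity rule used in the cited model.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, n_words = 20, 20, 3
# Each "word" is a fixed pair of sparse binary patterns (perception, action).
pre_pats = (rng.random((n_words, n_pre)) < 0.2).astype(float)
post_pats = (rng.random((n_words, n_post)) < 0.2).astype(float)

W = 0.05 * rng.random((n_pre, n_post))
ltp, ltd, w_max = 0.02, 0.01, 1.0

for _ in range(500):
    k = rng.integers(n_words)                         # present one word pairing
    pre, post = pre_pats[k], post_pats[k]
    W += ltp * np.outer(pre, post)                    # LTP: pre and post co-active
    W -= ltd * np.outer(pre, 1.0 - post)              # LTD: pre active, post silent
    np.clip(W, 0.0, w_max, out=W)

# Strongly connected (kernel) synapses vs. pruned ones after learning
print("saturated:", (W > 0.9).sum(), "pruned:", (W < 0.01).sum())
```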

  12. Blastema cells derived from New Zealand white rabbit's pinna carry stemness properties as shown by differentiation into insulin producing, neural, and osteogenic lineages representing three embryonic germ layers.

    Science.gov (United States)

    Saeinasab, Morvarid; Matin, Maryam M; Rassouli, Fatemeh B; Bahrami, Ahmad Reza

    2016-05-01

    Stem cells (SCs) are known as undifferentiated cells with self-renewal and differentiation capacities. Regeneration is a phenomenon that occurs in a limited number of animals after injury, during which blastema tissue is formed. It has been hypothesized that upon injury, the dedifferentiation of surrounding tissues leads into the appearance of cells with SC characteristics. In present study, stem-like cells (SLCs) were obtained from regenerating tissue of New Zealand white rabbit's pinna and their stemness properties were examined by their capacity to differentiate toward insulin producing cells (IPCs), as well as neural and osteogenic lineages. Differentiation was induced by culture of SLCs in defined medium, and cell fates were monitored by specific staining, RT-PCR and flow cytometry assays. Our results revealed that dithizone positive cells, which represent IPCs, and islet-like structures appeared 1 week after induction of SLCs, and this observation was confirmed by the elevated expression of Ins, Pax6 and Glut4 at mRNA level. Furthermore, SLCs were able to express neural markers as early as 1 week after retinoic acid treatment. Finally, SLCs were able to differentiate into osteogenic lineage, as confirmed by Alizarin Red S staining and RT-PCR studies. In conclusion, SLCs, which could successfully differentiate into cells derived from all three germ layers, can be considered as a valuable model to study developmental biology and regenerative medicine.

  13. Modelling the Flow Stress of Alloy 316L using a Multi-Layered Feed Forward Neural Network with Bayesian Regularization

    Science.gov (United States)

    Abiri, Olufunminiyi; Twala, Bhekisipho

    2017-08-01

    In this paper, a constitutive model based on a multilayer feedforward neural network with Bayesian regularization is developed for alloy 316L during high strain rate and high temperature plastic deformation. The input variables are strain rate, temperature and strain, while the output value is the flow stress of the material. The results show that the use of the Bayesian regularization technique reduces the potential for overfitting and overtraining. The prediction quality of the model is thereby improved. The model predictions are in good agreement with experimental measurements. The measurement data used for the network training and model comparison were taken from the relevant literature. The developed model is robust as it can be generalized to deformation conditions slightly below or above the training dataset.
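
    As a rough, non-authoritative sketch of this kind of model (not the authors' code), the snippet below fits a small feed-forward network mapping strain rate, temperature and strain to flow stress; scikit-learn's L2 penalty (alpha) stands in for Bayesian regularization, and the data are synthetic placeholders:

        # Regularized MLP for flow stress: inputs are strain rate, temperature, strain.
        # The L2 penalty approximates, but is not, Bayesian regularization.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.uniform([1e-3, 300.0, 0.0], [1e3, 1200.0, 0.5], size=(200, 3))   # strain rate, T (K), strain
        y = 100.0 + 50.0 * np.log1p(X[:, 0]) - 0.05 * X[:, 1] + 400.0 * X[:, 2]  # synthetic flow stress (MPa)

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=5000, random_state=0),
        )
        model.fit(X, y)
        print("R^2 on training data:", model.score(X, y))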

  14. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer of a neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
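
    The following toy Python sketch (an illustration, not code from the paper) contrasts the classical weighted-sum neuron with the morphological max-of-sums and min-of-sums neurons described above:

        import numpy as np

        def classical_neuron(x, w, b=0.0):
            return np.dot(w, x) + b        # linear: sum of products

        def morphological_neuron_max(x, w):
            return np.max(x + w)           # nonlinear even before thresholding: maximum of sums

        def morphological_neuron_min(x, w):
            return np.min(x + w)           # dual form: minimum of sums

        x = np.array([0.2, 1.5, -0.7])
        w = np.array([1.0, -2.0, 0.5])
        print(classical_neuron(x, w), morphological_neuron_max(x, w), morphological_neuron_min(x, w))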

  15. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neuro-biology, introducing the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  16. Real-Time Analysis of Online Product Reviews by Means of Multi-Layer Feed-Forward Neural Networks

    Directory of Open Access Journals (Sweden)

    Reinhold Decker

    2014-11-01

    Full Text Available In the recent past, the quantitative analysis of online product reviews (OPRs) has become a popular manifestation of marketing intelligence activities focusing on products that are frequently subject to electronic word-of-mouth (eWOM). Typical elements of OPRs are overall star ratings, product attribute scores, recommendations, pros and cons, and free texts. The first three elements are of particular interest because they provide an aggregate view of reviewers' opinions about the products of interest. However, the significance of individual product attributes in the overall evaluation process can vary in the course of time. Accordingly, ad hoc analyses of OPRs that have been downloaded at a certain point in time are of limited value for dynamic eWOM monitoring because of their snapshot character. On the other hand, opinion platforms can increase the meaningfulness of the OPRs posted there and, therewith, the usefulness of the platform as a whole, by directing eWOM activities to those product attributes that really matter at present. This paper therefore introduces a neural network-based approach that allows the dynamic tracking of the influence the posted scores of product attributes have on the overall star ratings of the concerning products. By using an elasticity measure, this approach supports the identification of those attributes that tend to lose or gain significance in the product evaluation process over time. The usability of this approach is demonstrated using real OPR data on digital cameras and hotels.

  17. Neural network modelling of planform geometry of headland-bay beaches

    Science.gov (United States)

    Iglesias, G.; López, I.; Castro, A.; Carballo, R.

    2009-02-01

    The shoreline of beaches in the lee of coastal salients or man-made structures, usually known as headland-bay beaches, has a distinctive curvature; wave fronts curve as a result of wave diffraction at the headland and in turn cause the shoreline to bend. The ensuing curved planform is of great interest both as a peculiar landform and in the context of engineering projects in which it is necessary to predict how a coastal structure will affect the sandy shoreline in its lee. A number of empirical models have been put forward, each based on a specific equation. A novel approach, based on the application of artificial neural networks, is presented in this work. Unlike the conventional method, no particular equation of the planform is embedded in the model. Instead, it is the model itself that learns about the problem from a series of examples of headland-bay beaches (the training set) and thereafter applies this self-acquired knowledge to other cases (the test set) for validation. Twenty-three headland-bay beaches from around the world were selected, of which sixteen and seven make up the training and test sets, respectively. As there is no well-developed theory for deciding upon the most convenient neural network architecture to deal with a particular data set, an experimental study was conducted in which ten different architectures with one and two hidden neuron layers and five training algorithms - 50 different options combining network architecture and training algorithm - were compared. Each of these options was implemented, trained and tested in order to find the best-performing approach for modelling the planform of headland-bay beaches. Finally, the selected neural network model was compared with a state-of-the-art planform model and was shown to outperform it.

  18. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

    information [2]. Each one of these cells acts as a simple processor. When individual cells interact with one another, the complex abilities of the brain are made possible. In neural networks, the input or data are processed by a propagation function that adds up the values of all the incoming data. The ending value is then compared with a threshold or specific value. The resulting value must exceed the activation function value in order to become output. The activation function is a mathematical function that a neuron uses to produce an output referring to its input value. [8] Figure 1 depicts this process. Neural networks usually have three components: an input layer, a hidden layer, and an output layer. These layers create the end result of the neural network. A real world example is a child associating the word dog with a picture. The child says dog and simultaneously looks at a picture of a dog. The input is the spoken word 'dog', the hidden component is the brain processing, and the output will be the category of the word dog based on the picture. This illustration describes how a neural network functions.
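
    A minimal illustration of the propagation-and-threshold process described above (the values are arbitrary):

        import numpy as np

        def neuron_output(inputs, weights, threshold):
            net = np.dot(weights, inputs)            # propagation function: sum of the weighted inputs
            return 1.0 if net > threshold else 0.0   # output only if the net value exceeds the threshold

        inputs = np.array([0.9, 0.1, 0.4])
        weights = np.array([0.5, 0.8, 0.2])
        print(neuron_output(inputs, weights, threshold=0.5))  # -> 1.0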

  19. Prediction of beta-turns and beta-turn types by a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN).

    Science.gov (United States)

    Kirschner, Andreas; Frishman, Dmitrij

    2008-10-01

    Prediction of beta-turns from amino acid sequences has long been recognized as an important problem in structural bioinformatics due to their frequent occurrence as well as their structural and functional significance. Because various structural features of proteins are intercorrelated, secondary structure information has often been employed as an additional input for machine learning algorithms while predicting beta-turns. Here we present a novel bidirectional Elman-type recurrent neural network with multiple output layers (MOLEBRNN) capable of predicting multiple mutually dependent structural motifs and demonstrate its efficiency in recognizing three aspects of protein structure: beta-turns, beta-turn types, and secondary structure. The advantage of our method compared to other predictors is that it does not require any external input except for sequence profiles because interdependencies between different structural features are taken into account implicitly during the learning process. In a sevenfold cross-validation experiment on a standard test dataset our method exhibits a total prediction accuracy of 77.9% and a Matthews correlation coefficient of 0.45, the highest performance reported so far. It also outperforms other known methods in delineating individual turn types. We demonstrate how simultaneous prediction of multiple targets influences prediction performance on single targets. The MOLEBRNN presented here is a generic method applicable in a variety of research fields where multiple mutually dependent target classes need to be predicted. http://webclu.bio.wzw.tum.de/predator-web/.

  20. Predicting methionine and lysine contents in soybean meal and fish meal using a group method of data handling-type neural network

    Energy Technology Data Exchange (ETDEWEB)

    Mottaghitalab, M.; Nikkhah, N.; Darmani-Kuhi, H.; López, S.; France, J.

    2015-07-01

    Artificial neural network models offer an alternative to linear regression analysis for predicting the amino acid content of feeds from their chemical composition. A group method of data handling-type neural network (GMDH-type NN), with an evolutionary method of genetic algorithm, was used to predict methionine (Met) and lysine (Lys) contents of soybean meal (SBM) and fish meal (FM) from their proximate analyses (i.e. crude protein, crude fat, crude fibre, ash and moisture). A data set with 119 data lines for Met and 116 lines for Lys was used to develop GMDH-type NN models with two hidden layers. The data lines were divided into two groups to produce training and validation sets. The data sets were imported into the GEvoM software for training the networks. The predictive capability of the constructed models was evaluated by their abilities to estimate the validation data sets accurately. A quantitative examination of goodness of fit for the predictive models was made using a number of precision, concordance and bias statistics. The statistical performance of the models developed revealed close agreement between observed and predicted Met and Lys contents for SBM and FM. The results of this study clearly illustrate the validity of GMDH-type NN models to estimate accurately the amino acid content of poultry feed ingredients from their chemical composition. (Author)
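
    For readers unfamiliar with the GMDH idea, the sketch below shows one self-organizing layer in simplified form: every pair of inputs receives a quadratic "partial description" fitted by least squares on a training set, and candidates are ranked by validation error. This is only a generic illustration with assumed data, not the GEvoM implementation used in the study:

        import numpy as np
        from itertools import combinations

        def quadratic_features(X, i, j):
            a, b = X[:, i], X[:, j]
            return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

        def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
            candidates = []
            for i, j in combinations(range(X_tr.shape[1]), 2):
                coef, *_ = np.linalg.lstsq(quadratic_features(X_tr, i, j), y_tr, rcond=None)
                mse = np.mean((quadratic_features(X_va, i, j) @ coef - y_va) ** 2)
                candidates.append((mse, i, j, coef))
            return sorted(candidates, key=lambda c: c[0])[:keep]   # the best partial descriptions survive

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 5))                 # stand-in for proximate analysis inputs
        y = 2.0 * X[:, 0] * X[:, 2] + 0.5 * X[:, 4] + rng.normal(scale=0.1, size=120)
        best = gmdh_layer(X[:80], y[:80], X[80:], y[80:])
        print([(round(m, 4), i, j) for m, i, j, _ in best])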

  1. Natural and Unnatural Oil Layers on the Surface of the Gulf of Mexico Detected and Quantified in Synthetic Aperture RADAR Images with Texture Classifying Neural Network Algorithms

    Science.gov (United States)

    MacDonald, I. R.; Garcia-Pineda, O. G.; Morey, S. L.; Huffer, F.

    2011-12-01

    Effervescent hydrocarbons rise naturally from hydrocarbon seeps in the Gulf of Mexico and reach the ocean surface. This oil forms thin (~0.1 μm) layers that enhance specular reflectivity and have been widely used to quantify the abundance and distribution of natural seeps using synthetic aperture radar (SAR). An analogous process occurred at a vastly greater scale for oil and gas discharged from BP's Macondo well blowout. SAR data allow direct comparison of the areas of the ocean surface covered by oil from natural sources and the discharge. We used a texture classifying neural network algorithm to quantify the areas of naturally occurring oil-covered water in 176 SAR image collections from the Gulf of Mexico obtained between May 1997 and November 2007, prior to the blowout. Separately we also analyzed 36 SAR image collections obtained between 26 April and 30 July, 2010 while the discharged oil was visible in the Gulf of Mexico. For the naturally occurring oil, we removed pollution events and transient oceanographic effects by including only the reflectance anomalies that recurred in the same locality over multiple images. We measured the area of oil layers in a grid of 10x10 km cells covering the entire Gulf of Mexico. Floating oil layers were observed in only a fraction of the total Gulf area amounting to 1.22x10^5 km^2. In a bootstrap sample of 2000 replications, the combined average area of these layers was 7.80x10^2 km^2 (sd 86.03). For a regional comparison, we divided the Gulf of Mexico into four quadrates along 90° W longitude, and 25° N latitude. The NE quadrate, where the BP discharge occurred, received on average 7.0% of the total natural seepage in the Gulf of Mexico (5.24 x10^2 km^2, sd 21.99); the NW quadrate received on average 68.0% of this total (5.30 x10^2 km^2, sd 69.67). The BP blowout occurred in the NE quadrate of the Gulf of Mexico; discharged oil that reached the surface drifted over a large area north of 25° N. Performing a

  2. The potential of computer vision, optical backscattering parameters and artificial neural network modelling in monitoring the shrinkage of sweet potato (Ipomoea batatas L.) during drying.

    Science.gov (United States)

    Onwude, Daniel I; Hashim, Norhashila; Abdan, Khalina; Janius, Rimfiel; Chen, Guangnan

    2018-03-01

    Drying is a method used to preserve agricultural crops. During the drying of products with high moisture content, structural changes in shape, volume, area, density and porosity occur. These changes could affect the final quality of the dried product and also the effective design of drying equipment. Therefore, this study investigated a novel approach in monitoring and predicting the shrinkage of sweet potato during drying. Drying experiments were conducted at temperatures of 50-70 °C and sample thicknesses of 2-6 mm. The volume and surface area obtained from camera vision, and the perimeter and illuminated area from backscattered optical images, were analysed and used to evaluate the shrinkage of sweet potato during drying. The relationship between dimensionless moisture content and shrinkage of sweet potato in terms of volume, surface area, perimeter and illuminated area was found to be linearly correlated. The results also demonstrated that the shrinkage of sweet potato based on computer vision and backscattered optical parameters is affected by the product thickness, drying temperature and drying time. A multilayer perceptron (MLP) artificial neural network with an input layer containing three cells, two hidden layers (18 neurons), and five cells in the output layer was used to develop a model that can monitor, control and predict the shrinkage parameters and moisture content of sweet potato slices under different drying conditions. The developed ANN model satisfactorily predicted the shrinkage and dimensionless moisture content of sweet potato with correlation coefficients greater than 0.95. Combined computer vision, laser light backscattering imaging and artificial neural network can be used as a non-destructive, rapid and easily adaptable technique for in-line monitoring, predicting and controlling the shrinkage and moisture changes of food and agricultural crops during drying. © 2017 Society of Chemical Industry.
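
    As a hedged illustration of the stated architecture (three inputs, two hidden layers, five outputs), the sketch below assumes the 18 hidden neurons are split 9 + 9 and uses synthetic drying data; it is not the authors' model:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform([50.0, 2.0, 0.0], [70.0, 6.0, 300.0], size=(300, 3))  # temperature, thickness, drying time
        Y = np.column_stack([                 # volume, surface area, perimeter, illuminated area, moisture ratio
            1.0 - 0.002 * X[:, 2],
            1.0 - 0.0015 * X[:, 2],
            1.0 - 0.001 * X[:, 2],
            1.0 - 0.0012 * X[:, 2],
            np.exp(-0.01 * X[:, 2]),
        ])

        mlp = MLPRegressor(hidden_layer_sizes=(9, 9), max_iter=5000, random_state=0)
        mlp.fit(X, Y)
        print(mlp.predict(X[:2]))             # predicted shrinkage parameters and moisture content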

  3. Empirical modeling of a dewaxing system of lubricant oil using Artificial Neural Network (ANN); Modelagem empirica de um sistema de desparafinacao de oleo lubrificante usando redes neurais artificiais

    Energy Technology Data Exchange (ETDEWEB)

    Fontes, Cristiano Hora de Oliveira; Medeiros, Ana Claudia Gondim de; Silva, Marcone Lopes; Neves, Sergio Bello; Carvalho, Luciene Santos de; Guimaraes, Paulo Roberto Britto; Pereira, Magnus; Vianna, Regina Ferreira [Universidade Salvador (UNIFACS), Salvador, BA (Brazil). Dept. de Engenharia e Arquitetura]. E-mail: paulorbg@unifacs.br; Santos, Nilza Maria Querino dos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)]. E-mail: nilzaq@petrobras.com.br

    2003-07-01

    The MIBK (m-i-b-ketone) dewaxing unit, located at the Landulpho Alves refinery, allows two different operating modes: dewaxing and oil removal. The former comprises an oil-wax separation process, which generates a wax stream with 2-5% oil. The latter involves the reprocessing of the wax stream to reduce its oil content. Both involve a two-stage filtration process (primary and secondary) with rotative filters. The general aim of this research is to develop empirical models to predict variables, for both unit operating modes, to be used in control algorithms, since many data are not available during normal plant operation and therefore need to be estimated. Studies have suggested that the oil content is an essential variable to develop reliable empirical models, and this work is concerned with the development of an empirical model for the prediction of the oil content in the wax stream leaving the primary filters. The model is based on a feed-forward Artificial Neural Network (ANN), and tests with one and two hidden layers indicate very good agreement between experimental and predicted values. (author)

  4. Modeling of yield and environmental impact categories in tea processing units based on artificial neural networks.

    Science.gov (United States)

    Khanali, Majid; Mobli, Hossein; Hosseinzadeh-Bandbafha, Homa

    2017-12-01

    In this study, an artificial neural network (ANN) model was developed for predicting the yield and life cycle environmental impacts based on energy inputs required in processing of black tea, green tea, and oolong tea in Guilan province of Iran. A life cycle assessment (LCA) approach was used to investigate the environmental impact categories of processed tea based on the cradle to gate approach, i.e., from production of input materials using raw materials to the gate of tea processing units, i.e., packaged tea. Thus, all the tea processing operations such as withering, rolling, fermentation, drying, and packaging were considered in the analysis. The initial data were obtained from tea processing units while the required data about the background system was extracted from the EcoInvent 2.2 database. LCA results indicated that diesel fuel and corrugated paper box used in drying and packaging operations, respectively, were the main hotspots. The black tea processing unit caused the highest pollution among the three processing units. Three feed-forward back-propagation ANN models, based on the Levenberg-Marquardt training algorithm with two hidden layers with sigmoid activation functions and a linear transfer function in the output layer, were applied for the three types of processed tea. The neural networks were developed based on energy equivalents of eight different input parameters (energy equivalents of fresh tea leaves, human labor, diesel fuel, electricity, adhesive, carton, corrugated paper box, and transportation) and 11 output parameters (yield, global warming, abiotic depletion, acidification, eutrophication, ozone layer depletion, human toxicity, freshwater aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, and photochemical oxidation). The results showed that the developed ANN models with R2 values in the range of 0.878 to 0.990 had excellent performance in predicting all the output variables based on inputs. Energy consumption for
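
    A hedged sketch of the network shape described above (eight energy inputs, two sigmoid hidden layers, eleven linear outputs); the hidden-layer sizes and data are placeholders, and scikit-learn's Adam solver stands in for Levenberg-Marquardt, which it is not:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(42)
        X = rng.uniform(size=(150, 8))    # energy equivalents of the eight inputs
        Y = X @ rng.uniform(size=(8, 11)) + 0.05 * rng.normal(size=(150, 11))   # yield + 10 impact categories

        ann = MLPRegressor(hidden_layer_sizes=(12, 8), activation="logistic",   # sigmoid hidden layers
                           solver="adam", max_iter=8000, random_state=0)        # output layer is linear by default
        ann.fit(X, Y)
        print("R^2:", ann.score(X, Y))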

  5. Comprehensive Forecast of Urban Water-Energy Demand Based on a Neural Network Model

    Directory of Open Access Journals (Sweden)

    Ziyi Yin

    2018-03-01

    Full Text Available The water-energy nexus has been a popular topic of research in recent years. The relationships between the demand for water resources and energy are intense and closely connected in urban areas. The primary, secondary, and tertiary industry gross domestic product (GDP), the total population, the urban population, annual precipitation, agricultural and industrial water consumption, tap water supply, the total discharge of industrial wastewater, the daily sewage treatment capacity, total and domestic electricity consumption, and the consumption of coal in industrial enterprises above the designed size were chosen as input indicators. A feedforward artificial neural network model (ANN) based on a back-propagation algorithm with two hidden layers was constructed to combine urban water resources with energy demand. This model used historical data from 1991 to 2016 from Wuxi City, eastern China. Furthermore, a multiple linear regression model (MLR) was introduced for comparison with the ANN. The results show the following: (a) The mean relative error values of the forecast and historical urban water-energy demands are 1.58% and 2.71%, respectively; (b) The predicted water-energy demand value for 2020 is 4.843 billion cubic meters and 47.561 million tons of standard coal equivalent; (c) The predicted water-energy demand value in the year 2030 is 5.887 billion cubic meters and 60.355 million tons of standard coal equivalent; (d) Compared with the MLR, the ANN performed better in fitting training data, which achieved a more satisfactory accuracy and may provide a reference for urban water-energy supply planning decisions.

  6. Predicting octane number using nuclear magnetic resonance spectroscopy and artificial neural networks

    KAUST Repository

    Abdul Jameel, Abdul Gani

    2018-04-17

    Machine learning algorithms are attracting significant interest for predicting complex chemical phenomena. In this work, a model to predict the research octane number (RON) and motor octane number (MON) of pure hydrocarbons, hydrocarbon-ethanol blends and gasoline-ethanol blends has been developed using artificial neural networks (ANN) and molecular parameters from 1H nuclear magnetic resonance (NMR) spectroscopy. RON and MON of 128 pure hydrocarbons, 123 hydrocarbon-ethanol blends of known composition and 30 FACE (fuels for advanced combustion engines) gasoline-ethanol blends were utilized as a dataset to develop the ANN model. The effect of the weight % of seven functional groups, including paraffinic CH3 groups, paraffinic CH2 groups, paraffinic CH groups, olefinic -CH=CH2 groups, naphthenic CH-CH2 groups, aromatic C-CH groups and ethanolic OH groups, on RON and MON was studied. The effect of branching (i.e., methyl substitution), denoted by a parameter termed the branching index (BI), and molecular weight (MW) were included as inputs along with the seven functional groups to predict RON and MON. The topologies of the developed ANN models for RON (9-540-314-1) and MON (9-340-603-1) have two hidden layers and a large number of nodes, and the models were validated against experimentally measured RON and MON of pure hydrocarbons, hydrocarbon-ethanol and gasoline-ethanol blends; a good correlation (R2=0.99) between the predicted and the experimental data was obtained. The average error of prediction for both RON and MON was found to be 1.2, which is close to the range of experimental uncertainty. This shows that the functional groups in a molecule or fuel can be used to predict its ON, and the complex relationship between them can be captured by tools like ANN.
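
    The stated RON topology can be written down directly; the sketch below uses scikit-learn with random placeholder data (281 rows, matching the dataset size) rather than the NMR-derived functional-group dataset:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)
        X = rng.uniform(size=(281, 9))            # 7 functional-group weight % + branching index + MW
        y = 60.0 + 40.0 * rng.uniform(size=281)   # placeholder RON values

        ron_model = MLPRegressor(hidden_layer_sizes=(540, 314), max_iter=2000, random_state=0)  # 9-540-314-1
        ron_model.fit(X, y)
        print(ron_model.predict(X[:3]))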

  7. Predicting octane number using nuclear magnetic resonance spectroscopy and artificial neural networks

    KAUST Repository

    Abdul Jameel, Abdul Gani; Oudenhoven, Vincent Van; Emwas, Abdul-Hamid M.; Sarathy, Mani

    2018-01-01

    Machine learning algorithms are attracting significant interest for predicting complex chemical phenomena. In this work, a model to predict the research octane number (RON) and motor octane number (MON) of pure hydrocarbons, hydrocarbon-ethanol blends and gasoline-ethanol blends has been developed using artificial neural networks (ANN) and molecular parameters from 1H nuclear magnetic resonance (NMR) spectroscopy. RON and MON of 128 pure hydrocarbons, 123 hydrocarbon-ethanol blends of known composition and 30 FACE (fuels for advanced combustion engines) gasoline-ethanol blends were utilized as a dataset to develop the ANN model. The effect of the weight % of seven functional groups, including paraffinic CH3 groups, paraffinic CH2 groups, paraffinic CH groups, olefinic -CH=CH2 groups, naphthenic CH-CH2 groups, aromatic C-CH groups and ethanolic OH groups, on RON and MON was studied. The effect of branching (i.e., methyl substitution), denoted by a parameter termed the branching index (BI), and molecular weight (MW) were included as inputs along with the seven functional groups to predict RON and MON. The topologies of the developed ANN models for RON (9-540-314-1) and MON (9-340-603-1) have two hidden layers and a large number of nodes, and the models were validated against experimentally measured RON and MON of pure hydrocarbons, hydrocarbon-ethanol and gasoline-ethanol blends; a good correlation (R2=0.99) between the predicted and the experimental data was obtained. The average error of prediction for both RON and MON was found to be 1.2, which is close to the range of experimental uncertainty. This shows that the functional groups in a molecule or fuel can be used to predict its ON, and the complex relationship between them can be captured by tools like ANN.

  8. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  9. A clinical decision support system using multilayer perceptron neural network to assess well being in diabetes.

    Science.gov (United States)

    Narasingarao, M R; Manda, R; Sridhar, G R; Madhu, K; Rao, A A

    2009-02-01

    is 1 and the number of units in the hidden layer is 6, the normalized system error was 470.57. With input samples of 100, 150 and 200, keeping the other variables constant, the normalized system error was 419.61, 359.67 and 332.32 respectively. Similar values were found for the normalized system error when the number of units in the hidden layer was increased to 7, 8 and 9 respectively. With two hidden layers, each hidden layer containing 6, 7, 8, 9, 10 or 11 units, the same values of normalized system error were found for the samples of 50, 100, 150, and 200. Women weighing between 40 kg and 85 kg had higher levels of depression than men weighing between 39 kg and 102 kg. We have developed a prototype neural network model to predict psychosocial well-being in diabetes when biological or biographical variables are given as inputs. When more data were fed to the system, the normalized system error was reduced.

  10. Coatings of nanostructured pristine graphene-IrOx hybrids for neural electrodes: Layered stacking and the role of non-oxygenated graphene

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, E. [Institut Ciència de Materials de Barcelona (ICMAB-CSIC), Campus UAB, E-08193, Bellaterra, Barcelona (Spain); Lichtenstein, M.P.; Suñol, C. [Institut d' Investigacions Biomèdiques de Barcelona (IIBB-CSIC), Institut d' Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c/Rosselló 161, 08036 Barcelona (Spain); Casañ-Pastor, N., E-mail: nieves@icmab.es [Institut Ciència de Materials de Barcelona (ICMAB-CSIC), Campus UAB, E-08193, Bellaterra, Barcelona (Spain)

    2015-10-01

    The need to enhance charge capacity in neural stimulation electrodes is promoting the formation of new materials and coatings. Among all the possible types of graphene, pristine graphene prepared by graphite electrochemical exfoliation is used in this work to form a new nanostructured IrOx–graphene hybrid (IrOx–eG). Graphene is stabilized in suspension by IrOx nanoparticles without surfactants. Anodic electrodeposition results in coatings with much smaller roughness than IrOx–graphene oxide. Exfoliated pristine graphene (eG) does not electrodeposit in the absence of iridium, but IrOx-nanoparticle adhesion on graphene flakes drives the process. IrOx–eG has a significantly different electronic state than graphene oxide, and a different coordination for carbon. Electron diffraction shows the reflection features expected for graphene. IrOx 1–2 nm cluster/nanoparticles are oxohydroxo-species and adhere to 10 nm graphene platelets. eG induces charge storage capacity values five times larger than in pure IrOx, and if calculated per carbon atom, this enhancement is one order of magnitude larger than that induced by graphene oxide. IrOx–eG coatings show optimal in vitro neural cell viability and function as cell culture substrates. The fully straightforward electrochemical exfoliation and electrodeposition constitutes a step towards the application of graphene in biomedical systems, expanding the knowledge of pristine graphene vs. graphene oxide in bioelectrodes. - Highlights: • Pristine Graphene is incorporated in coatings as nanostructured IrOx–eG hybrid. • IrOx-nanoparticles drive the electrodeposition of graphene. • Hybrid CSC is one order of magnitude the charge capacity of IrOx. • Per carbon atom, the CSC increase is 35 times larger than for graphene oxide. • Neurons are fully functional on the coating.

  11. Neural Network Based Model of an Industrial Oil-Fired Boiler System ...

    African Journals Online (AJOL)

    A two-layer feed-forward neural network with Hyperbolic tangent sigmoid ... The neural network model when subjected to test, using the validation input data; ... Proportional Integral Derivative (PID) Controller is used to control the neural ...

  12. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural networks (ANN) models, including general regression neural network (GRNN) and multi-layer ... N-hexane (HPLC grade) was purchased from. Fisher Scientific. ..... Simultaneous Quantification of Seven Flavonoids in.

  13. Geospatial scenario based modelling of urban and agricultural intrusions in Ramsar wetland Deepor Beel in Northeast India using a multi-layer perceptron neural network

    Science.gov (United States)

    Mozumder, Chitrini; Tripathi, Nitin K.

    2014-10-01

    In recent decades, the world has experienced unprecedented urban growth which endangers the green environment in and around urban areas. In this work, an artificial neural network (ANN) based model is developed to predict future impacts of urban and agricultural expansion on the uplands of Deepor Beel, a Ramsar wetland in the city area of Guwahati, Assam, India, by 2025 and 2035 respectively. Simulations were carried out for three different transition rates as determined from the changes during 2001-2011, namely simple extrapolation, Markov Chain (MC), and system dynamic (SD) modelling, using projected population growth, which were further investigated based on three different zoning policies. The first zoning policy employed no restriction while the second conversion restriction zoning policy restricted urban-agricultural expansion in the Guwahati Municipal Development Authority (GMDA) proposed green belt, extending to a third zoning policy providing wetland restoration in the proposed green belt. The prediction maps were found to be greatly influenced by the transition rates and the allowed transitions from one class to another within each sub-model. The model outputs were compared with GMDA land demand as proposed for 2025 whereby the land demand as produced by MC was found to best match the projected demand. Regarding the conservation of Deepor Beel, the Landscape Development Intensity (LDI) Index revealed that wetland restoration zoning policies may reduce the impact of urban growth on a local scale, but none of the zoning policies was found to minimize the impact on a broader base. The results from this study may assist the planning and reviewing of land use allocation within Guwahati city to secure ecological sustainability of the wetlands.

  14. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  15. Deep Neural Yodelling

    OpenAIRE

    Pfäffli, Daniel (Autor/in)

    2018-01-01

    Yodel music differs from most other genres by exercising the transition from chest voice to falsetto with an audible glottal stop which is recognised even by laymen. Yodel often consists of a yodeller with a choir accompaniment. In Switzerland, a distinction is made between natural yodel and yodel songs. Today's approaches to music generation with machine learning algorithms are based on neural networks, which are best described as stacked layers of neurons which are connected with neurons...

  16. Rotation Invariance Neural Network

    OpenAIRE

    Li, Shiyuan

    2017-01-01

    Rotation invariance and translation invariance are of great value in image recognition tasks. In this paper, we introduce a new architecture for convolutional neural networks (CNN), named the cyclic convolutional layer, to achieve rotation invariance in 2-D symbol recognition. The network can also provide the position and orientation of the 2-D symbol, achieving detection of multiple non-overlapping targets. Last but not least, this architecture can achieve one-shot learning in some cases using thos...

  17. Meta-modeling of the pesticide fate model MACRO for groundwater exposure assessments using artificial neural networks

    Science.gov (United States)

    Stenemo, Fredrik; Lindahl, Anna M. L.; Gärdenäs, Annemieke; Jarvis, Nicholas

    2007-08-01

    Several simple index methods that use easily accessible data have been developed and included in decision-support systems to estimate pesticide leaching across larger areas. However, these methods often lack important process descriptions (e.g. macropore flow), which brings into question their reliability. Descriptions of macropore flow have been included in simulation models, but these are too complex and demanding for spatial applications. To resolve this dilemma, a neural network simulation meta-model of the dual-permeability macropore flow model MACRO was created for pesticide groundwater exposure assessment. The model was parameterized using pedotransfer functions that require as input the clay and sand content of the topsoil and subsoil, and the topsoil organic carbon content. The meta-model also requires the topsoil pesticide half-life and the soil organic carbon sorption coefficient as input. A fully connected feed-forward multilayer perceptron classification network with two hidden layers, linked to fully connected feed-forward multilayer perceptron neural networks with one hidden layer, trained on sub-sets of the target variable, was shown to be a suitable meta-model for the intended purpose. A Fourier amplitude sensitivity test showed that the model output (the 80th percentile average yearly pesticide concentration at 1 m depth for a 20 year simulation period) was sensitive to all input parameters. The two input parameters related to pesticide characteristics (i.e. soil organic carbon sorption coefficient and topsoil pesticide half-life) were the most influential, but texture in the topsoil was also quite important since it was assumed to control the mass exchange coefficient that regulates the strength of macropore flow. This is in contrast to models based on the advection-dispersion equation where soil texture is relatively unimportant. The use of the meta-model is exemplified with a case-study where the spatial variability of pesticide leaching is
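
    A hedged sketch of the meta-model structure described above: a two-hidden-layer MLP classifier assigns each input to a concentration class, and a one-hidden-layer MLP regressor trained only on that class's subset produces the numeric estimate. The bin edges, layer sizes and data below are assumptions, not those of the MACRO meta-model:

        import numpy as np
        from sklearn.neural_network import MLPClassifier, MLPRegressor

        rng = np.random.default_rng(3)
        X = rng.uniform(size=(400, 5))                       # clay, sand, organic carbon, Koc, half-life (scaled)
        y = np.exp(3.0 * X[:, 3] + 2.0 * X[:, 4]) * 1e-3     # synthetic leachate concentration

        edges = np.quantile(y, [0.5, 0.9])                   # split the target into low / medium / high classes
        labels = np.digitize(y, edges)

        classifier = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0).fit(X, labels)
        regressors = {
            c: MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0).fit(X[labels == c], y[labels == c])
            for c in np.unique(labels)
        }

        def predict(x_row):
            c = classifier.predict(x_row.reshape(1, -1))[0]          # route to the class-specific regressor
            return regressors[c].predict(x_row.reshape(1, -1))[0]

        print(predict(X[0]), y[0])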

  18. Target recognition based on convolutional neural network

    Science.gov (United States)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    An important part of object target recognition is feature extraction, which can be divided into hand-crafted feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but it carries a high risk of over-fitting due to its global connectivity. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained with a layer-by-layer convolutional neural network (CNN), which can extract features from lower layers to higher layers. The resulting features are more discriminative, which is beneficial to object target recognition.

  19. Autonomous Navigation Apparatus With Neural Network for a Mobile Vehicle

    Science.gov (United States)

    Quraishi, Naveed (Inventor)

    1996-01-01

    An autonomous navigation system for a mobile vehicle arranged to move within an environment includes a plurality of sensors arranged on the vehicle and at least one neural network including an input layer coupled to the sensors, a hidden layer coupled to the input layer, and an output layer coupled to the hidden layer. The neural network produces output signals representing respective positions of the vehicle, such as the X coordinate, the Y coordinate, and the angular orientation of the vehicle. A plurality of patch locations within the environment are used to train the neural networks to produce the correct outputs in response to the distances sensed.
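
    A toy sketch of this arrangement (sensor count, hidden-layer size and training data are assumptions, not the patented system):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        sensors = rng.uniform(size=(500, 8))                 # e.g. eight distance sensors on the vehicle
        pose = np.column_stack([                             # target outputs: x, y, orientation (radians)
            sensors[:, :4].mean(axis=1),
            sensors[:, 4:].mean(axis=1),
            np.pi * (sensors[:, 0] - sensors[:, 7]),
        ])

        nav_net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        nav_net.fit(sensors, pose)                           # training patches map sensed distances to pose
        print(nav_net.predict(sensors[:1]))                  # -> estimated [x, y, orientation]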

  20. Forecast of Wind Speed with a Backpropagation Artificial Neural Network in the Isthmus of Tehuantepec Region in the State of Oaxaca, Mexico

    Directory of Open Access Journals (Sweden)

    Orlando Lastres Danguillecourt

    2012-03-01

    Full Text Available This paper presents the preliminary results of setting up an artificial neural network (ANN) of the feed-forward type with the back-propagation training method to forecast wind speed in the region of the Isthmus of Tehuantepec, Oaxaca, Mexico. The database used covers the years from June 2008 to November 2011 and was supplied by a meteorological station located at the Universidad del Istmo, Tehuantepec campus. The experiments were done using the following variables: wind speed, pressure, temperature and date. Seven tests were carried out combining these variables, comparing their mean square error (MSE) and correlation coefficient r between the predicted and experimental data. In this research, an ANN with two hidden layers is proposed, for a forecast of 48 hours.

  1. Evolvable synthetic neural system

    Science.gov (United States)

    Curtis, Steven A. (Inventor)

    2009-01-01

    An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.

  2. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically

  3. Application of ANNS in tube CHF prediction: effect on neuron number in hidden layer

    International Nuclear Information System (INIS)

    Han, L.; Shan, J.; Zhang, B.

    2004-01-01

    Prediction of the Critical Heat Flux (CHF) for upward flow of water in a uniformly heated vertical round tube is studied with the Artificial Neural Network (ANN) method, utilizing different numbers of neurons in the hidden layers. This study is based on thermal equilibrium conditions. The neuron number in the hidden layers is varied from 5 to 30 in steps of 5. The effect of varying the number of neurons in the hidden layers is analyzed. The analysis shows that the neuron number in the hidden layers should be appropriate: too few neurons reduce the prediction accuracy, and too many may result in abnormal parametric trends. It is concluded that the appropriate neuron number in the two hidden layers is [15 15]. (authors)
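
    The kind of sweep described above can be written compactly; the sketch below varies the neurons in two hidden layers from 5 to 30 in steps of 5 and compares validation error, using synthetic placeholder data rather than CHF measurements:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(400, 4))        # stand-ins for pressure, mass flux, quality, diameter (scaled)
        y = 1.0 + X[:, 0] - 0.5 * X[:, 2] + 0.2 * X[:, 1] * X[:, 3]
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

        for n in range(5, 31, 5):
            net = MLPRegressor(hidden_layer_sizes=(n, n), max_iter=5000, random_state=0).fit(X_tr, y_tr)
            err = np.mean((net.predict(X_va) - y_va) ** 2)
            print(f"[{n} {n}] hidden neurons -> validation MSE {err:.4f}")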

  4. Multistability in bidirectional associative memory neural networks

    International Nuclear Information System (INIS)

    Huang Gan; Cao Jinde

    2008-01-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, of which 2^n are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.
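
    For orientation, a minimal BAM works as sketched below: a correlation matrix stores bipolar pattern pairs, and recall alternates between the two layers until the states settle. The stored patterns here are arbitrary examples, unrelated to the stability analysis in the paper:

        import numpy as np

        def sgn(v):
            return np.where(v >= 0, 1, -1)

        # two stored pattern pairs (x in {-1,1}^n, y in {-1,1}^m)
        X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
        Y = np.array([[1, -1, 1], [-1, 1, 1]])
        W = sum(np.outer(x, y) for x, y in zip(X, Y))    # Hebbian-style correlation matrix

        def recall(x, steps=10):
            for _ in range(steps):
                y = sgn(W.T @ x)     # forward pass to the Y layer
                x = sgn(W @ y)       # backward pass to the X layer
            return x, y

        print(recall(np.array([1, -1, 1, 1])))           # settles on a stored (x, y) pair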

  5. Multistability in bidirectional associative memory neural networks

    Science.gov (United States)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, of which 2^n are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  6. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  7. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.......Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure....

  8. Prediction of Daily Global Solar Radiation by Daily Temperatures and Artificial Neural Networks in Different Climates

    Directory of Open Access Journals (Sweden)

    S. I Saedi

    2018-03-01

    Full Text Available Introduction Global solar radiation is the sum of direct, diffuse, and reflected solar radiation. Weather forecasting, agricultural practices, and solar equipment development are three major fields that need proper information about solar radiation. Furthermore, the sun is regarded as a huge source of renewable and clean energy which can be used in numerous applications to reduce the environmental impacts of non-renewable fossil fuels. Therefore, easy and fast estimation of daily global solar radiation plays an effective role in these fields. Materials and Methods This study aimed at predicting daily global solar radiation by means of the artificial neural network (ANN) method, based on easily obtained weather data, i.e. daily mean, minimum and maximum temperatures. Having a variety of climates with long-term valid weather data, Washington State, located in the northwestern part of the USA, was chosen for this purpose. It has a total of 19 weather stations covering all the State's climates. First, the station with the largest number of valid historical weather records (Lind) was chosen to develop, validate, and test different ANN models. Three training algorithms, i.e. Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG), and Bayesian Regularization (BR), were tested in one- and two-hidden-layer networks, each layer with up to 20 neurons, to derive the six best architectures. R, RMSE, MAPE, and scatter plots were considered to evaluate each network at all steps. In order to investigate the generalizability of the best six models, they were tested on the other Washington State weather stations. The most accurate and general model was then evaluated on an Iranian sample weather station, chosen to be Mashhad. Results and Discussion The variation of MSE for the three training functions in the one-hidden-layer models for the Lind station indicated that SCG converged weights and biases in a shorter time than LM, and LM did so faster than BR. It means that SCG provided the fastest

  9. Classification of Company Performance using Weighted Probabilistic Neural Network

    Science.gov (United States)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    The performance of a company can be judged by looking at its financial status, i.e. whether it is in a good or bad state. Classification of company performance can be achieved by several approaches, either parametric or non-parametric. The neural network is one of the non-parametric methods. One of the Artificial Neural Network (ANN) models is the Probabilistic Neural Network (PNN). A PNN consists of four layers, i.e. an input layer, a pattern layer, an addition layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same values as its weights. This study uses a PNN that has been modified in the weighting process between the pattern layer and the addition layer by involving the calculation of the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model achieves a very high accuracy, reaching 100%.
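
    A hedged sketch of the idea (the data, smoothing parameter and class structure are invented for illustration): a PNN scores each class with kernel values computed from its training patterns, and the weighted variant swaps the Euclidean distance for a per-class Mahalanobis distance:

        import numpy as np

        def pnn_predict(x, patterns, labels, sigma=0.5, mahalanobis=False):
            scores = {}
            for c in np.unique(labels):
                P = patterns[labels == c]
                if mahalanobis:
                    cov_inv = np.linalg.pinv(np.cov(P, rowvar=False))
                    d2 = np.array([(x - p) @ cov_inv @ (x - p) for p in P])   # Mahalanobis distances
                else:
                    d2 = np.sum((P - x) ** 2, axis=1)                         # squared Euclidean distances
                scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))         # pattern + addition layers
            return max(scores, key=scores.get)                                # output layer: largest score wins

        rng = np.random.default_rng(0)
        healthy = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(30, 2))         # companies in a good state
        distressed = rng.normal(loc=[-1.0, -1.0], scale=0.6, size=(30, 2))    # companies in a bad state
        X, y = np.vstack([healthy, distressed]), np.array([1] * 30 + [0] * 30)
        print(pnn_predict(np.array([0.8, 1.1]), X, y, mahalanobis=True))      # -> 1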

  10. Gas Classification Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNN in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks (each block consisting of six layers), a pooling layer, and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP). PMID:29316723

  11. Gas Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNN in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks (each block consisting of six layers), a pooling layer, and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).

  12. Neural network based model of an industrial oil ...

    African Journals Online (AJOL)

    eobe

    technique. g, Neural Network Model, Regression, Mean Square Error, PID controller. ... during the training processes. An additio ... used to carry out simulation studies of the mode .... A two-layer feed-forward neural network with Matlab.

  13. The design, fabrication and evaluation of an egg weighing device using a capacitive sensor and neural networks

    Directory of Open Access Journals (Sweden)

    S Khalili

    2015-09-01

    egg-laying day, and the second and fourth day after laying. Results and Discussion: In this study, two series of networks were built and evaluated. In the first series, two-layer networks were developed, and in the second series, three-layer networks. In the two-layer neural networks, the number of neurons in the hidden layer was varied from 2 to 10. According to the results for the two-layer networks, the two-layer network with 10 neurons offers the best results (the highest R-value and minimum RMSE) and can be chosen as the most effective two-layer network. The three-layer neural networks were composed of two hidden layers. The number of neurons in the first hidden layer was 10 and in the second layer it was varied from 1 to 20. Among the three-layer networks, the network with 7 neurons, having the highest R-value and the lowest error, is the most appropriate network. It is even more efficient than the two-layer network with 10 neurons. So, the most appropriate structure is 1-7-10-16 and it was selected for calibration of the weighing device. To evaluate and assess the accuracy of the weighing machine, the weights of 24 samples of fresh eggs were predicted and compared with the actual values obtained using a digital scale with an accuracy of 0.01 gr. The paired t-test was used to compare the measured and predicted values, and the Bland-Altman method was used for charting the agreement between them. Based on the findings, a difference between the measured and predicted values of up to 5.4 gr was observed, related to a very large sample. The mean absolute error is 2.21 gr and the mean absolute percentage error is 3.75%. According to the findings, the 95% limits of agreement between the two weighing methods are -5.3 gr and 3.36 gr. Thus, the dielectric technique may underestimate the egg weight by up to 5.3 gr or overestimate it by up to 3.36 gr relative to the actual value.

  14. Differences between otolith- and semicircular canal-activated neural circuitry in the vestibular system.

    Science.gov (United States)

    Uchino, Yoshio; Kushiro, Keisuke

    2011-12-01

    In the last two decades, we have focused on establishing a reliable technique for focal stimulation of vestibular receptors to evaluate neural connectivity. Here, we summarize the vestibular-related neuronal circuits for the vestibulo-ocular reflex, vestibulocollic reflex, and vestibulospinal reflex arcs. The focal stimulating technique also uncovered some hidden neural mechanisms. In the otolith system, we identified two hidden neural mechanisms that enhance otolith receptor sensitivity. The first is commissural inhibition, which boosts sensitivity by incorporating inputs from bilateral otolith receptors, the existence of which was in contradiction to the classical understanding of the otolith system but was observed in the utricular system. The second mechanism, cross-striolar inhibition, intensifies the sensitivity of inputs from both sides of receptive cells across the striola in a single otolith sensor. This was an entirely novel finding and is typically observed in the saccular system. We discuss the possible functional meaning of commissural and cross-striolar inhibition. Finally, our focal stimulating technique was applied to elucidate the different constructions of axonal projections from each vestibular receptor to the spinal cord. We also discuss the possible function of the unique neural connectivity observed in each vestibular receptor system. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  15. Drift chamber tracking with neural networks

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed

  16. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  17. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
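
    A minimal sketch of genetic-algorithm training with a fixed network topology, in the spirit described above, is given below. The synthetic exchange-rate-like series, the 4-lag/8-hidden-unit topology, the population size and the mutation scale are all illustrative assumptions rather than the paper's settings.

        import numpy as np

        rng = np.random.default_rng(1)
        series = np.cumsum(rng.normal(0, 0.01, 400)) + 1.0      # synthetic rate series
        lags = 4
        X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
        y = series[lags:]

        n_in, n_hid = lags, 8
        n_w = n_in * n_hid + n_hid + n_hid + 1                   # total number of weights

        def unpack(w):
            i = 0
            W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
            b1 = w[i:i + n_hid]; i += n_hid
            W2 = w[i:i + n_hid]; i += n_hid
            b2 = w[i]
            return W1, b1, W2, b2

        def mse(w):
            W1, b1, W2, b2 = unpack(w)
            h = np.tanh(X @ W1 + b1)
            pred = h @ W2 + b2
            return np.mean((pred - y) ** 2)

        pop = rng.normal(0, 0.5, (40, n_w))                      # initial population of weight vectors
        for gen in range(200):
            fitness = np.array([mse(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[:10]]              # truncation selection
            children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, n_w))  # mutation
            pop = np.vstack([parents, children])

        print("best training MSE:", min(mse(ind) for ind in pop))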

  18. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, the residuals of the integral or differential equations were employed as the objective functions for training the neural network. This differs from conventional neural network training, where the sum of squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and they are considered promising for certain kinds of problems. (author)
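
    The following sketch illustrates the idea of training on the residual of a differential equation rather than on output errors, under a simplifying assumption: the hidden-layer weights are fixed at random and only the output weights are determined, so minimising the squared residual becomes a linear least-squares problem. The test problem u' + u = 0 with u(0) = 1 and the trial form u(x) = 1 + x*net(x) are illustrative choices, not the paper's.

        import numpy as np

        rng = np.random.default_rng(2)
        n_hidden = 30
        w = rng.normal(0, 2.0, n_hidden)          # fixed random hidden weights
        b = rng.normal(0, 2.0, n_hidden)          # fixed random hidden biases
        x = np.linspace(0.0, 2.0, 100)[:, None]   # collocation points

        phi = np.tanh(w * x + b)                  # hidden activations, shape (100, n_hidden)
        dphi = w * (1.0 - phi ** 2)               # d(phi)/dx

        # Trial solution u(x) = 1 + x*net(x) satisfies u(0) = 1 by construction, net(x) = phi @ c.
        # The residual of u' + u = 0 is linear in c:  A @ c + 1 = 0.
        A = phi + x * dphi + x * phi
        c, *_ = np.linalg.lstsq(A, -np.ones(len(x)), rcond=None)

        u = 1.0 + (x * phi) @ c
        print("max |u - exp(-x)| on [0, 2]:", np.abs(u - np.exp(-x[:, 0])).max())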

  19. Natural melanin composites by layer-by-layer assembly

    Science.gov (United States)

    Eom, Taesik; Shim, Bong Sub

    2015-04-01

    Melanin is an electrically conductive and biocompatible material, because its conjugated backbone structure provides conducting pathways in human skin, eyes, brain, and beyond. It therefore has potential as a material for neural interfaces and implantable devices. Extracted from Sepia officinalis ink, our natural melanin was uniformly dispersed, mostly in polar solvents such as water and alcohols. The dispersed melanin was then fabricated into nano-thin layered composites by the layer-by-layer (LBL) assembly technique. Combined with polyvinyl alcohol (PVA), the melanin nanoparticles behave as an LBL counterpart to form finely tuned nanostructured films. The LBL process can adjust the smart performance of the composites by varying the layering conditions and sandwich thickness. We further characterized the melanin loading degree of the stacked layers, the combined nanostructures, the electrical properties, and the biocompatibility of the resulting composites by UV-vis spectrophotometry, scanning electron microscopy (SEM), multimeter measurements, and an in-vitro PC12 cell test, respectively.

  20. FUZZY NEURAL NETWORK FOR OBJECT IDENTIFICATION ON INTEGRATED CIRCUIT LAYOUTS

    Directory of Open Access Journals (Sweden)

    A. A. Doudkin

    2015-01-01

    Full Text Available A fuzzy neural network model based on the neocognitron is proposed to identify layout objects in images of topological layers of integrated circuits. Testing of the model on images of real chip layouts showed a higher degree of identification by the proposed neural network in comparison to the base neocognitron.

  1. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week as inputs is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
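
    A minimal sketch of the described architecture, an input layer, one hidden layer and an output layer trained by back-propagation of the error, is given below. The four indicator-like inputs (moving-average deviations and momenta), the cyclic day-of-week encoding, the hidden-layer size and the learning rate are illustrative assumptions, not the indicators or settings used in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        price = np.cumsum(rng.normal(0, 1.0, 600)) + 100.0

        def indicators(p, t):
            return np.array([
                p[t] - p[t - 5:t].mean(),          # deviation from 5-day moving average
                p[t] - p[t - 10:t].mean(),         # deviation from 10-day moving average
                p[t] - p[t - 1],                   # 1-day momentum
                p[t - 1] - p[t - 2],               # lagged momentum
                np.sin(2 * np.pi * (t % 5) / 5),   # day-of-week, encoded cyclically
            ])

        t_idx = np.arange(10, len(price) - 1)
        X = np.array([indicators(price, t) for t in t_idx])
        y = (price[t_idx + 1] > price[t_idx]).astype(float)      # 1 = price moves up next day
        X = (X - X.mean(0)) / X.std(0)

        H, lr = 8, 0.05
        W1 = rng.normal(0, 0.3, (X.shape[1], H)); b1 = np.zeros(H)
        W2 = rng.normal(0, 0.3, H);               b2 = 0.0

        for epoch in range(500):                                  # back-propagation of the error
            h = np.tanh(X @ W1 + b1)
            p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
            dz2 = p - y                                           # cross-entropy gradient at the output
            dW2 = h.T @ dz2 / len(y); db2 = dz2.mean()
            dh = dz2[:, None] * W2[None, :] * (1.0 - h ** 2)      # propagate error to the hidden layer
            dW1 = X.T @ dh / len(y);  db1 = dh.mean(axis=0)
            W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

        print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())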

  2. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

    We study the learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to those of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  3. ESTUDIO DE SERIES TEMPORALES DE CONTAMINACIÓN AMBIENTAL MEDIANTE TÉCNICAS DE REDES NEURONALES ARTIFICIALES TIME SERIES ANALYSIS OF ATMOSPHERE POLLUTION DATA USING ARTIFICIAL NEURAL NETWORKS TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Giovanni Salini Calderón

    2006-12-01

    concentrations between May and August for the years 1994 to 1996. In order to find the optimal time spacing between data and the number of past values necessary to forecast a future value, two standard tests were performed, Average Mutual Information (AMI) and False Nearest Neighbours (FNN). The results of these tests suggest that the most convenient choice for modelling was to use 4 values with 6-hour spacing on a given day as input in order to forecast the value at 6 AM on the following day. Once the number and type of input and output variables were fixed, we implemented a forecasting model based on the neural network technique. We used a feedforward multilayer neural network and trained it with the backpropagation algorithm. We tested networks with none, one and two hidden layers. The best model was one with a single hidden layer, in contradiction with a previous study that found that the minimum error was obtained with a net without a hidden layer. Forecasts with the neural network are more accurate than those produced with a persistence model (the value six hours ahead is the same as the actual value).
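
    A minimal sketch of the Average Mutual Information (AMI) test used above to choose the time spacing between inputs is given below, applied to a synthetic hourly pollutant-like series; the series itself, the histogram bin count and the lag range are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.arange(24 * 120)                                   # 120 days of hourly values
        series = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)

        def average_mutual_information(x, lag, bins=16):
            a, b = x[:-lag], x[lag:]
            joint, _, _ = np.histogram2d(a, b, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

        ami = [average_mutual_information(series, lag) for lag in range(1, 25)]
        best = int(np.argmin(ami[:12])) + 1                       # first broad minimum within half a day
        print("suggested spacing (hours):", best)
        # With the spacing fixed (6 h in the study), the forecasting inputs would be the 4 most
        # recent values at that spacing and the target the 6 AM value of the following day.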

  4. Separable explanations of neural network decisions

    DEFF Research Database (Denmark)

    Rieger, Laura

    2017-01-01

    Deep Taylor Decomposition is a method used to explain neural network decisions. When applying this method to non-dominant classifications, the resulting explanation does not reflect important features for the chosen classification. We propose that this is caused by the dense layers and propose...

  5. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using

  6. Neural Tube Defects

    Science.gov (United States)

    Neural tube defects are birth defects of the brain, spine, or spinal cord. They happen in the ... that she is pregnant. The two most common neural tube defects are spina bifida and anencephaly. In ...

  7. Neural tissue-spheres

    DEFF Research Database (Denmark)

    Andersen, Rikke K; Johansen, Mathias; Blaabjerg, Morten

    2007-01-01

    By combining new and established protocols we have developed a procedure for isolation and propagation of neural precursor cells from the forebrain subventricular zone (SVZ) of newborn rats. Small tissue blocks of the SVZ were dissected and propagated en bloc as free-floating neural tissue...... content, thus allowing experimental studies of neural precursor cells and their niche...

  8. Neural network tagging in a toy model

    International Nuclear Information System (INIS)

    Milek, Marko; Patel, Popat

    1999-01-01

    The purpose of this study is to compare the Artificial Neural Network approach to HEP analysis against traditional methods. The toy model used in this analysis consists of two types of particles defined by four generic properties. A number of 'events' were created according to the model using standard Monte Carlo techniques. Several fully connected, feed-forward, multi-layered Artificial Neural Networks were trained to tag the model events. The performance of each network was compared to the standard analysis mechanisms, and significant improvement was observed.
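
    A minimal sketch of such a toy model is given below: two particle types described by four generic properties are generated by Monte Carlo, and a simple rectangular-cut tag stands in for the "standard analysis" baseline. The means, widths and cut values are illustrative assumptions; the study's feed-forward networks would be trained and evaluated on the same kind of event sample.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 5000
        signal     = rng.normal(loc=[1.0, 0.5, 2.0, 0.0], scale=[1.0, 1.0, 1.5, 1.0], size=(n, 4))
        background = rng.normal(loc=[0.0, 0.0, 1.0, 0.5], scale=[1.0, 1.5, 1.5, 1.0], size=(n, 4))
        events = np.vstack([signal, background])
        labels = np.concatenate([np.ones(n), np.zeros(n)])

        # Traditional cut-based tag: keep events inside a rectangular region of property space.
        tagged = (events[:, 0] > 0.5) & (events[:, 2] > 1.5)

        efficiency = tagged[labels == 1].mean()                  # fraction of true signal kept
        purity = labels[tagged].mean()                           # fraction of tagged events that are signal
        print(f"cut-based tag: efficiency = {efficiency:.2f}, purity = {purity:.2f}")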

  9. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. From the weights of the trained neural networks, kernel windows are created and used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  10. Classification of urine sediment based on convolution neural network

    Science.gov (United States)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolution neural network framework, this paper breaks the constraints of the original convolution neural network framework, which requires large training samples and samples of the same size. By moving and cropping the input images, sub-graphs of the same size are generated. The generated sub-graphs then use the method of dropout, increasing the diversity of samples and preventing overfitting. Proper subsets are randomly selected from the sub-graph set such that each subset has the same number of elements but no two subsets are identical. The proper subsets are used as input layers for the convolution neural network. Through the convolution layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test set and training set are obtained. In the classification experiment on red blood cells, white blood cells and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.
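
    A minimal sketch of the data-preparation step described above is given below: each input image is moved and cropped into same-size sub-graphs, and distinct, equally sized proper subsets of these sub-graphs are drawn for training. The image size, crop size, stride and subset sizes are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(6)

        def crops(image, size=32, stride=16):
            """Slide a window over the image and return same-size sub-graphs."""
            h, w = image.shape
            return np.array([image[i:i + size, j:j + size]
                             for i in range(0, h - size + 1, stride)
                             for j in range(0, w - size + 1, stride)])

        def distinct_subsets(n_items, subset_size, n_subsets):
            """Randomly pick proper subsets of equal size, all different from one another."""
            seen, out = set(), []
            while len(out) < n_subsets:
                idx = tuple(sorted(rng.choice(n_items, subset_size, replace=False)))
                if idx not in seen:
                    seen.add(idx)
                    out.append(np.array(idx))
            return out

        image = rng.random((64, 80))                # stand-in for one microscopy image
        sub = crops(image)                          # same-size sub-graphs from one sample
        subsets = distinct_subsets(len(sub), subset_size=4, n_subsets=3)
        print("sub-graphs:", sub.shape, "subset indices:", [s.tolist() for s in subsets])
        # Each subset of sub-graphs would then be fed (with dropout) to the convolutional network.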

  11. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to devise new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

    OpenAIRE

    Guliyev , Namig; Ismailov , Vugar

    2016-01-01

    The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hid...

  13. Neural control of magnetic suspension systems

    Science.gov (United States)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controllers designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden layer feedforward networks trained via back propagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.
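
    A minimal sketch of the second paradigm mentioned above, a Gaussian radial-basis-function network whose output weights are obtained analytically (here by linear least squares), is given below. The inverse-square force-versus-gap curve used as the target, and the centre and width choices, are illustrative assumptions rather than the actual suspension model.

        import numpy as np

        rng = np.random.default_rng(7)
        gap = np.linspace(0.002, 0.01, 200)                       # air gap (m), assumed range
        force = 1e-6 / gap ** 2 + rng.normal(0, 0.2, gap.size)    # noisy nonlinear plant map

        centres = np.linspace(0.002, 0.01, 12)                    # RBF centres across the gap range
        width = 0.001
        Phi = np.exp(-((gap[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
        weights, *_ = np.linalg.lstsq(Phi, force, rcond=None)     # analytical output-weight training

        pred = Phi @ weights
        print("RMS fit error:", np.sqrt(np.mean((pred - force) ** 2)))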

  14. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNN) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although training typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, the most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional-reflectance-spectrum-based neural network to help us understand the problem from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  15. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the

  16. Application of artificial neural network and adaptive neuro-fuzzy inference system to investigate corrosion rate of zirconium-based nano-ceramic layer on galvanized steel in 3.5% NaCl solution

    International Nuclear Information System (INIS)

    Mousavifard, S.M.; Attar, M.M.; Ghanbari, A.; Dadgar, M.

    2015-01-01

    Highlights: • Film formation of Zr-based conversion coating under different conditions was investigated. • We study the effect of some parameters on anticorrosion performance of conversion coating. • Optimization of processing conditions for surface treatment of galvanized steel was obtained. • Modeling and predicting corrosion current density of treated surfaces was performed using ANN and ANFIS. - Abstract: A nano-ceramic Zr-based conversion solution was prepared and optimization of Zr concentration, pH, temperature and immersion time for the treatment of hot-dip galvanized steel (HDG) was performed. SEM microscopy was utilized to investigate the microstructure and film formation of the layer and the anticorrosion performance of conversion coating was studied using polarization test. Artificial intelligence systems (ANN and ANFIS) were applied on the data obtained from polarization test and the models for predicting corrosion current density values were attained. The outcome of these models showed proper predictability of the methods. The influence of input parameters was discussed and the optimized conditions for Zr-based conversion layer formation on the galvanized steel were obtained as follows: pH 3.8–4.5, Zr concentration of about 100 ppm, ambient temperature and immersion time of about 90 s

  17. Application of artificial neural network and adaptive neuro-fuzzy inference system to investigate corrosion rate of zirconium-based nano-ceramic layer on galvanized steel in 3.5% NaCl solution

    Energy Technology Data Exchange (ETDEWEB)

    Mousavifard, S.M. [Department of Polymer Engineering and Color Technology, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Attar, M.M., E-mail: attar@aut.ac.ir [Department of Polymer Engineering and Color Technology, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Ghanbari, A. [Department of Polymer Engineering and Color Technology, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Dadgar, M. [Textile Engineering Department, Neyshabur University, Neyshabur (Iran, Islamic Republic of)

    2015-08-05

    Highlights: • Film formation of Zr-based conversion coating under different conditions was investigated. • We study the effect of some parameters on anticorrosion performance of conversion coating. • Optimization of processing conditions for surface treatment of galvanized steel was obtained. • Modeling and predicting corrosion current density of treated surfaces was performed using ANN and ANFIS. - Abstract: A nano-ceramic Zr-based conversion solution was prepared and optimization of Zr concentration, pH, temperature and immersion time for the treatment of hot-dip galvanized steel (HDG) was performed. SEM microscopy was utilized to investigate the microstructure and film formation of the layer and the anticorrosion performance of conversion coating was studied using polarization test. Artificial intelligence systems (ANN and ANFIS) were applied on the data obtained from polarization test and the models for predicting corrosion current density values were attained. The outcome of these models showed proper predictability of the methods. The influence of input parameters was discussed and the optimized conditions for Zr-based conversion layer formation on the galvanized steel were obtained as follows: pH 3.8–4.5, Zr concentration of about 100 ppm, ambient temperature and immersion time of about 90 s.

  18. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  19. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    Science.gov (United States)

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a Structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Evolvable Neural Software System

    Science.gov (United States)

    Curtis, Steven A.

    2009-01-01

    The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  1. Layered materials

    Science.gov (United States)

    Johnson, David; Clarke, Simon; Wiley, John; Koumoto, Kunihito

    2014-06-01

    Layered compounds, materials with a large anisotropy in their bonding, electrical and/or magnetic properties, have been important in the development of solid state chemistry, physics and engineering applications. Layered materials were the initial test bed where chemists developed intercalation chemistry that evolved into the field of topochemical reactions where researchers are able to perform sequential steps to arrive at kinetically stable products that cannot be directly prepared by other approaches. Physicists have used layered compounds to discover and understand novel phenomena made more apparent through reduced dimensionality. Charge and spin density waves and, more recently, the remarkable two-dimensional topological insulating state of condensed matter physics were discovered in two-dimensional materials. The understanding developed in two-dimensional materials enabled subsequent extension of these and other phenomena into three-dimensional materials. Layered compounds have also been used in many technologies as engineers and scientists used their unique properties to solve challenging technical problems (low temperature ion conduction for batteries, easy shear planes for lubrication in vacuum, edge decorated catalyst sites for catalytic removal of sulfur from oil, etc). The articles that are published in this issue provide an excellent overview of the spectrum of activities that are being pursued, as well as an introduction to some of the most established achievements in the field. Clusters of papers discussing thermoelectric properties, electronic structure and transport properties, growth of single two-dimensional layers, intercalation and more extensive topochemical reactions and the interleaving of two structures to form new materials highlight the breadth of current research in this area. These papers will hopefully serve as a useful guideline for the interested reader to different important aspects in this field and

  2. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
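
    A minimal sketch of the dual feed-forward view described above is given below: one hidden layer whose units apply a rectified polynomial activation max(0, x)^n to the overlap between the input and a stored memory. The random memories and the noisy probe are illustrative; the point is that raising n shifts the hidden representation from many weakly active, feature-like units towards a single dominant, prototype-like unit.

        import numpy as np

        rng = np.random.default_rng(8)
        memories = np.sign(rng.normal(size=(20, 64)))            # 20 stored +/-1 patterns
        x = np.sign(memories[3] + 0.6 * rng.normal(size=64))     # noisy version of memory 3

        def rectified_poly(z, n):
            return np.maximum(z, 0.0) ** n

        for n in (1, 3, 10, 30):
            h = rectified_poly(memories @ x / 64.0, n)           # hidden activities
            share = h.max() / h.sum()                            # how dominant the best unit is
            print(f"n = {n:2d}: winning unit {h.argmax()}, share of total activity = {share:.2f}")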

  3. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system...... is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...

  4. Neural Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — As part of the Electrical and Computer Engineering Department and The Institute for System Research, the Neural Systems Laboratory studies the functionality of the...

  5. Reliability analysis of a consecutive r-out-of-n: F system based on neural networks

    International Nuclear Information System (INIS)

    Habib, Aziz; Alsieidi, Ragab; Youssef, Ghada

    2009-01-01

    In this paper, we present a generalized Markov reliability and fault-tolerant model, which includes the effects of permanent and intermittent faults, for reliability evaluations based on neural network techniques. The reliability of a consecutive r-out-of-n: F system was obtained with a three-layer connected neural network that represents a discrete-time state reliability Markov model of the system. We fed the neural network with the desired reliability of the system under design, and then extracted the parameters of the system from the neural weights at the convergence of the neural network to the desired reliability. Finally, we obtained simulation results.

  6. Hybrid digital signal processing and neural networks applications in PWRs

    International Nuclear Information System (INIS)

    Eryurek, E.; Upadhyaya, B.R.; Kavaklioglu, K.

    1991-01-01

    Signal validation and plant subsystem tracking in power and process industries require the prediction of one or more state variables. Both heteroassociative and autoassociative neural networks were applied for characterizing relationships among sets of signals. A multi-layer neural network paradigm was applied for sensor and process monitoring in a Pressurized Water Reactor (PWR). This nonlinear interpolation technique was found to be very effective for these applications

  7. Foot Plantar Pressure Estimation Using Artificial Neural Networks

    OpenAIRE

    Xidias , Elias; Koutkalaki , Zoi; Papagiannis , Panagiotis; Papanikos , Paraskevas; Azariadis , Philip

    2015-01-01

    Part 1: Smart Products; International audience; In this paper, we present a novel approach to estimate the maximum pressure over the foot plantar surface exerted by a two-layer shoe sole for three distinct phases of the gait cycle. The proposed method is based on Artificial Neural Networks and can be utilized for the determination of the comfort that is related to the sole construction. Input parameters to the proposed neural network are the material properties and the thicknesses of the sole...

  8. Stacked Heterogeneous Neural Networks for Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Florin Leon

    2010-01-01

    Full Text Available A hybrid model for time series forecasting is proposed. It is a stacked neural network, containing one normal multilayer perceptron with bipolar sigmoid activation functions, and the other with an exponential activation function in the output layer. As shown by the case studies, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.

  9. Universal approximation in p-mean by neural networks

    NARCIS (Netherlands)

    Burton, R.M; Dehling, H.G

    A feedforward neural net with $d$ input neurons and a single hidden layer of $n$ neurons is given by $f(x_1,\ldots,x_d)=\sum_{j=1}^{n} a_j\,\sigma\!\left(\sum_{i=1}^{d} w_{ji}x_i+\theta_j\right)$, where $\sigma$ is a sigmoidal activation function and $a_j, \theta_j, w_{ji} \in \mathbb{R}$. In this paper we study the approximation of arbitrary functions $f:\mathbb{R}^d\to\mathbb{R}$ by a neural net in an $L^p(\mu)$ norm for some finite measure $\mu$

  10. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Daskin, Ammar

    2016-01-01

    The learning process for multi-layered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase...

  11. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Full Text Available Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.

  12. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered-a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses connection weights formed in the preliminary learning process of a neural network classifier. In the experiments using the MNIST database of handwritten digits, the feature selection procedure allows reduction of feature number (from 60 000 to 7000) preserving comparable recognition capability while accelerating computations. Experimental comparison between the LiRA perceptron and the modular assembly neural network is accomplished, which shows that recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.
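
    A minimal sketch of the weight-based feature-selection idea described above is given below: after a preliminary learning pass, input features are ranked by the magnitude of their learned connection weights and only the strongest fraction is kept. The synthetic sparse binary features and the ridge-trained linear layer standing in for the preliminary classifier are illustrative assumptions, not the LiRA setup.

        import numpy as np

        rng = np.random.default_rng(9)
        n_samples, n_features, n_classes = 2000, 500, 10
        X = (rng.random((n_samples, n_features)) < 0.05).astype(float)   # sparse binary features
        informative = rng.choice(n_features, 50, replace=False)          # only 50 features matter
        y = (X[:, informative] @ rng.normal(size=(50, n_classes))).argmax(axis=1)
        Y = np.eye(n_classes)[y]

        # Preliminary learning: one linear layer trained in closed form (ridge regression).
        lam = 1.0
        W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

        # Rank features by total absolute outgoing weight and keep the strongest 10%.
        score = np.abs(W).sum(axis=1)
        keep = np.argsort(score)[::-1][:n_features // 10]
        recovered = len(set(keep.tolist()) & set(informative.tolist()))
        print(f"kept {len(keep)} of {n_features} features; {recovered} of 50 informative ones recovered")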

  13. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high-speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with the use of supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. However, in practice there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors, whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature of this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection

  14. Characterization of Radar Signals Using Neural Networks

    Science.gov (United States)

    1990-12-01


  15. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.

  16. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and expectations about climate has been a challenge throughout mankind's history. Accurate meteorological forecasts help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. Current research treats weather forecasting as a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting given the specific character of weather-forecasting frameworks. The study concentrates on data representing Saudi Arabian weather. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.

  17. Comparison of 2D and 3D neural induction methods for the generation of neural progenitor cells from human induced pluripotent stem cells

    DEFF Research Database (Denmark)

    Chandrasekaran, Abinaya; Avci, Hasan; Ochalek, Anna

    2017-01-01

    Neural progenitor cells (NPCs) from human induced pluripotent stem cells (hiPSCs) are frequently induced using 3D culture methodologies; however, it is unknown whether spheroid-based (3D) neural induction is actually superior to monolayer (2D) neural induction. Our aim was to compare the efficiency......), cortical layer (TBR1, CUX1) and glial markers (SOX9, GFAP, AQP4). Electron microscopy demonstrated that both methods resulted in morphologically similar neural rosettes. However, quantification of NPCs derived from 3D neural induction exhibited an increase in the number of PAX6/NESTIN double positive cells...... the electrophysiological properties between the two induction methods. In conclusion, 3D neural induction increases the yield of PAX6+/NESTIN+ cells and gives rise to neurons with longer neurites, which might be an advantage for the production of forebrain cortical neurons, highlighting the potential of 3D neural...

  18. A Simple Quantum Neural Net with a Periodic Activation Function

    OpenAIRE

    Daskin, Ammar

    2018-01-01

    In this paper, we propose a simple neural net that requires only $O(n\log_2 k)$ qubits and $O(nk)$ quantum gates: Here, $n$ is the number of input parameters, and $k$ is the number of weights applied to these parameters in the proposed neural net. We describe the network in terms of a quantum circuit, and then draw its equivalent classical neural net which involves $O(k^n)$ nodes in the hidden layer. Then, we show that the network uses a periodic activation function of cosine values o...

  19. Application of artificial neural network in radiographic diagnosis

    International Nuclear Information System (INIS)

    Piraino, D.; Amartur, S.; Richmond, B.; Schils, J.; Belhobek, G.

    1990-01-01

    This paper reports on an artificial neural network trained to rate the likelihood of different bone neoplasms when given a standard description of a radiograph. A three-layer network using the back-propagation algorithm was trained with descriptions of examples of bone neoplasms obtained from standard radiographic textbooks. Fifteen bone neoplasms obtained from clinical material were used as unknowns to test the trained artificial neural network. The artificial neural network correctly rated the pathologic diagnosis as the most likely diagnosis in 10 of the 15 unknown cases

  20. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  1. Phylogenetic convolutional neural networks in metagenomics.

    Science.gov (United States)

    Fioravanti, Diego; Giarratano, Ylenia; Maggio, Valerio; Agostinelli, Claudio; Chierici, Marco; Jurman, Giuseppe; Furlanello, Cesare

    2018-03-08

    Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree being used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided into 6 subclasses. Classification performance is promising when compared to classical algorithms like Support Vector Machines and Random Forest and a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. Operatively, the algorithm has been implemented as a custom Keras layer taking care of passing to the following convolutional layer not only the data but also the ranked list of neighbours of each sample, thus mimicking the case of image data, transparently to the user.

  2. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  3. Consciousness and neural plasticity

    DEFF Research Database (Denmark)

    changes or to abandon the strong identity thesis altogether. Were one to pursue a theory according to which consciousness is not an epiphenomenon to brain processes, consciousness may in fact affect its own neural basis. The neural correlate of consciousness is often seen as a stable structure, that is...

  4. Collision avoidance using neural networks

    Science.gov (United States)

    Sugathan, Shilpa; Sowmya Shree, B. V.; Warrier, Mithila R.; Vidhyapathi, C. M.

    2017-11-01

    Nowadays, accidents on roads are caused by the negligence of drivers and pedestrians or by unexpected obstacles that come into the vehicle's path. In this paper, a model (robot) is developed to assist drivers in travelling smoothly without accidents. It reacts to real-time obstacles on the four critical sides of the vehicle and takes the necessary action. The sensor used for detecting obstacles was an IR proximity sensor. A single-layer perceptron neural network is used to train and test all possible combinations of sensor results using Matlab (offline). A microcontroller (ARM Cortex-M3 LPC1768) is used to control the vehicle through the output data received from Matlab via serial communication. Hence, the vehicle becomes capable of reacting to any combination of real-time obstacles.
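
    A minimal sketch of the approach described above is given below: a single-layer perceptron trained offline on every combination of four obstacle sensors (front, back, left, right) against a table of desired actions. The particular action table is an illustrative assumption, not the authors' rule set.

        import numpy as np

        sensors = np.array([[(i >> b) & 1 for b in range(4)] for i in range(16)], dtype=float)
        front, back, left, right = sensors.T

        targets = np.stack([
            np.maximum.reduce([front, back, left * right]),   # stop: obstacle ahead/behind or boxed in
            right * (1 - left),                               # steer left: obstacle only on the right
            left * (1 - right),                               # steer right: obstacle only on the left
        ], axis=1)

        W = np.zeros((4, 3))
        b = np.zeros(3)
        for epoch in range(50):                               # perceptron learning rule
            errors = 0
            for x, t in zip(sensors, targets):
                y = ((x @ W + b) > 0).astype(float)
                W += np.outer(x, t - y)
                b += t - y
                errors += int(np.any(y != t))
            if errors == 0:
                break
        print("converged after", epoch + 1, "epochs; weights:\n", W)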

  5. Application of a Shallow Neural Network to Short-Term Stock Trading

    OpenAIRE

    Madahar, Abhinav; Ma, Yuze; Patel, Kunal

    2017-01-01

    Machine learning is increasingly prevalent in stock market trading. Though neural networks have seen success in computer vision and natural language processing, they have not been as useful in stock market trading. To demonstrate the applicability of a neural network in stock trading, we made a single-layer neural network that recommends buying or selling shares of a stock by comparing the highest high of 10 consecutive days with that of the next 10 days, a process repeated for the stock's ye...

  6. Dynamics of neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.

  7. Dynamics of neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-01-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible

  8. Dynamics of neural cryptography

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.

  9. An artificial neural network model for periodic trajectory generation

    Science.gov (United States)

    Shankar, S.; Gander, R. E.; Wood, H. C.

    A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.

  10. ANT Advanced Neural Tool

    Energy Technology Data Exchange (ETDEWEB)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-07-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial Neural Nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of a Neural Net, a Multilayer Perceptron developed by programming in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions about hardware implementations. (Author) 15 refs.

  11. ANT Advanced Neural Tool

    International Nuclear Information System (INIS)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-01-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial Neural Nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of a Neural Net, a Multilayer Perceptron developed by programming in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions about hardware implementations. (Author) 15 refs

  12. What the success of brain imaging implies about the neural code.

    Science.gov (United States)

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  13. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  14. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  15. Neutron spectrum unfolding using neural networks

    International Nuclear Information System (INIS)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.

    2004-01-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows unfolding the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)
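
    A minimal sketch of the 7-56-31 unfolding network described above, assuming scikit-learn is available; the synthetic response matrix and spectra below are placeholders for the IAEA spectra and the UTA4 matrix, which are not reproduced here.

```python
# Sketch of a 7-input, 56-hidden, 31-output unfolding network on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_spectra, n_spheres, n_groups = 200, 7, 31

# Hypothetical response matrix mapping 31-group spectra to 7 count rates.
response = rng.uniform(0.0, 1.0, size=(n_spheres, n_groups))
spectra = rng.uniform(0.0, 1.0, size=(n_spectra, n_groups))
count_rates = spectra @ response.T          # expected Bonner sphere readings

# 7 inputs -> 56 hidden neurons -> 31 outputs, as in the abstract.
net = MLPRegressor(hidden_layer_sizes=(56,), activation="logistic",
                   max_iter=5000, random_state=0)
net.fit(count_rates, spectra)

test_rates = spectra[:1] @ response.T
unfolded = net.predict(test_rates)          # reconstructed 31-group spectrum
print(unfolded.shape)                        # (1, 31)
```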

  16. Active Neural Localization

    OpenAIRE

    Chaplot, Devendra Singh; Parisotto, Emilio; Salakhutdinov, Ruslan

    2018-01-01

    Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of tradition...

  17. Neural cryptography with feedback.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido

    2004-04-01

    Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.

  18. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has emerged. One popular research interest of AI is recognition algorithms. In this paper, one of the most common architectures for image recognition, the Convolutional Neural Network (CNN), is introduced. Understanding its theory and structure is of great significance for every scholar interested in this field. A Convolutional Neural Network is an artificial neural network which combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN gives it reliable computational speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, combined with the Back-Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-train and perform in-depth learning. Basically, BP provides backward feedback for enhancing reliability and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
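
    As a small illustration of the gradient descent step the abstract refers to, here is a hedged sketch that fits a single linear neuron by repeatedly following the gradient of the mean squared error; the data and learning rate are arbitrary illustrations.

```python
# Minimal gradient-descent sketch for a single linear neuron with squared error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.05 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    y_hat = X @ w
    grad = 2.0 * X.T @ (y_hat - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                             # gradient-descent update

print(np.round(w, 2))   # close to [ 1.5, -2.0,  0.5]
```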

  19. Cognon Neural Model Software Verification and Hardware Implementation Design

    Science.gov (United States)

    Haro Negre, Pau

    Little is known yet about how the brain can recognize arbitrary sensory patterns within milliseconds using neural spikes to communicate information between neurons. In a typical brain there are several layers of neurons, with each neuron axon connecting to ˜104 synapses of neurons in an adjacent layer. The information necessary for cognition is contained in theses synapses, which strengthen during the learning phase in response to newly presented spike patterns. Continuing on the model proposed in "Models for Neural Spike Computation and Cognition" by David H. Staelin and Carl H. Staelin, this study seeks to understand cognition from an information theoretic perspective and develop potential models for artificial implementation of cognition based on neuronal models. To do so we focus on the mathematical properties and limitations of spike-based cognition consistent with existing neurological observations. We validate the cognon model through software simulation and develop concepts for an optical hardware implementation of a network of artificial neural cognons.

  20. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

    Full Text Available A modeling approach based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA) with two-tone and broadband signals as input have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.
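
    The idea of replacing sigmoid hidden units with Chebyshev orthogonal basis functions can be sketched as follows; this simplified example omits the Elman recurrent context units and fits only the linear output weights, so it illustrates the basis choice rather than the full IENN.

```python
# Hidden layer built from Chebyshev basis functions, output weights by least squares.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(7)
x = np.linspace(-1.0, 1.0, 200)                        # inputs scaled to [-1, 1]
y = np.tanh(3 * x) + 0.02 * rng.normal(size=x.size)    # example nonlinearity to model

n_hidden = 8                                           # number of Chebyshev units
H = C.chebvander(x, n_hidden - 1)                      # hidden activations T_0..T_7
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)          # linear output layer

y_hat = H @ w_out
print(f"SSE = {np.sum((y - y_hat) ** 2):.4f}")
```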

  1. Modeling polyvinyl chloride Plasma Modification by Neural Networks

    Science.gov (United States)

    Wang, Changquan

    2018-03-01

    A neural network model was constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using a uniform design. Discharge voltage, discharge gas gap and treatment time were used as the neural network input layer parameters. The measured contact angle values were used as the output layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural network. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted value is very close to the actual test value. The prediction model obtained here is useful for discharge plasma surface modification analysis.

  2. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  3. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System......The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using...

  4. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam; Abandah, Gheith; Jamour, Fuad Tarek

    2013-01-01

    and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments

  5. A neural network based seafloor classification using acoustic backscatter

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    This paper presents the results of a study of Artificial Neural Network (ANN) architectures [Self-Organizing Map (SOM) and Multi-Layer Perceptron (MLP)] using single beam echosounding data. The single beam echosounder, operable at 12 kHz, has been used...

  6. A neural network model for non invasive subsurface stratigraphic identification

    International Nuclear Information System (INIS)

    Sullivan, John M. Jr.; Ludwig, Reinhold; Lai Qiang

    2000-01-01

    Ground-Penetrating Radar (GPR) is a powerful tool to examine the stratigraphy below the ground surface for remote sensing. Increasingly, GPR has also found applications in microwave NDE as an interrogation tool to assess dielectric layers. Unfortunately, GPR data is characterized by a high degree of uncertainty and natural physical ambiguity. Robust decomposition routines are sparse for this application. We have developed a hierarchical set of neural network modules which split the task of layer profiling into consecutive stages. Successful GPR profiling of the subsurface stratigraphy is of key importance for many remote sensing applications including microwave NDE. Neural network modules were designed to accomplish the two main processing goals of recognizing the 'subsurface pattern' followed by the identification of the depths of the subsurface layers such as permafrost, groundwater table, and bedrock. We used an adaptive transform technique to transform raw GPR data into a small feature vector containing the most representative and discriminative features of the signal. This information formed the input for the neural network processing units. This strategy reduced the number of required training samples for the neural network by orders of magnitude. The entire processing system was trained using the adaptive transformed feature vector inputs and tested with real measured GPR data. The successful results of this system establish the feasibility of delineating subsurface layering nondestructively.

  7. Bringing Interpretability and Visualization with Artificial Neural Networks

    Science.gov (United States)

    Gritsenko, Andrey

    2017-01-01

    Extreme Learning Machine (ELM) is a training algorithm for the Single-Layer Feed-forward Neural Network (SLFN). The difference in theory between ELM and other training algorithms is the existence of an explicitly given solution due to the immutability of the initialized weights. In practice, ELMs achieve performance similar to that of other state-of-the-art…

  8. The application of artificial neural networks to TLD dose algorithm

    International Nuclear Information System (INIS)

    Moscovitch, M.

    1997-01-01

    We review the application of feed forward neural networks to multi element thermoluminescence dosimetry (TLD) dose algorithm development. A Neural Network is an information processing method inspired by the biological nervous system. A dose algorithm based on a neural network is a fundamentally different approach from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with a given response of a multi-element dosimeter (input) many times. The algorithm, being trained that way, eventually is able to produce its own unique solution to similar (but not exactly the same) dose calculation problems. For personnel dosimetry, the output consists of the desired dose components: deep dose, shallow dose, and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. For this application, a neural network architecture was developed based on the concept of functional links network (FLN). The FLN concept allowed an increase in the dimensionality of the input space and construction of a neural network without any hidden layers. This simplifies the problem and results in a relatively simple and reliable dose calculation algorithm. Overall, the neural network dose algorithm approach has been shown to significantly improve the precision and accuracy of dose calculations. (authors)
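
    A loose sketch of the functional link idea, in which the input space is expanded (here with pairwise products) so that a single linear layer, with no hidden layer, can fit the mapping; the TL readings and dose relations below are synthetic placeholders, not the algorithm of the paper.

```python
# Functional-link-style expansion followed by a single linear layer.
import numpy as np

rng = np.random.default_rng(3)
tl = rng.uniform(0.1, 1.0, size=(300, 4))            # 4-element TLD readings (synthetic)

def functional_links(x):
    """Augment inputs with pairwise products (one possible link choice)."""
    i, j = np.triu_indices(x.shape[1], k=1)
    return np.hstack([x, x[:, i] * x[:, j], np.ones((x.shape[0], 1))])

# Hypothetical targets standing in for deep, shallow, and eye dose.
doses = np.stack([tl[:, 0] + 0.3 * tl[:, 1] * tl[:, 2],
                  0.5 * tl[:, 1] + tl[:, 3],
                  tl[:, 2] * tl[:, 3]], axis=1)

Phi = functional_links(tl)                            # expanded input space
W, *_ = np.linalg.lstsq(Phi, doses, rcond=None)       # linear output weights
print(f"max abs error: {np.abs(Phi @ W - doses).max():.4f}")
```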

  9. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  10. CONSTRUCTION COST PREDICTION USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Smita K Magdum

    2017-10-01

    Full Text Available Construction cost prediction is important for construction firms to compete and grow in the industry. Accurate construction cost prediction in the early stage of a project is important for project feasibility studies and successful completion. There are many factors that affect cost prediction. This paper presents construction cost prediction as a multiple regression model with the cost of six materials as independent variables. The objective of this paper is to develop neural network and multilayer perceptron based models for construction cost prediction. Different models of NN and MLP are developed with varying hidden layer sizes and hidden nodes. Four artificial neural network models and twelve multilayer perceptron models are compared. MLP and NN give better results than the statistical regression method. As compared to NN, MLP works better on the training dataset but fails on the testing dataset. Five activation functions are tested to identify a suitable function for the problem. The 'elu' transfer function gives better results than the other transfer functions.

  11. Optical implementation of a feature-based neural network with application to automatic target recognition

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by feeding back the output of the feature correlator iteratively to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  12. Automatic target recognition using a feature-based optical neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space objects discrimination is also presented.

  13. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  14. Quantitative analysis of volatile organic compounds using ion mobility spectra and cascade correlation neural networks

    Science.gov (United States)

    Harrington, Peter DEB.; Zheng, Peng

    1995-01-01

    Ion Mobility Spectrometry (IMS) is a powerful technique for trace organic analysis in the gas phase. Quantitative measurements are difficult, because IMS has a limited linear range. Factors that may affect the instrument response are pressure, temperature, and humidity. Nonlinear calibration methods, such as neural networks, may be ideally suited for IMS. Neural networks have the capability of modeling complex systems. Many neural networks suffer from long training times and overfitting. Cascade correlation neural networks train at very fast rates. They also build their own topology, that is, the number of layers and the number of units in each layer. By controlling the decay parameter in training neural networks, reproducible and general models may be obtained.

  15. Development of neural network simulating power distribution of a BWR fuel bundle

    International Nuclear Information System (INIS)

    Tanabe, A.; Yamamoto, T.; Shinfuku, K.; Nakamae, T.

    1992-01-01

    A neural network model is developed to simulate the precise nuclear physics analysis program code for quick scoping survey calculations. The relation between enrichment and local power distribution of BWR fuel bundles was learned using a two-layer neural network (ENET). A new model introduces a burnable neutron absorber (gadolinia), added to several fuel rods to decrease the initial reactivity of a fresh bundle. A second-stage three-layer neural network (GNET) is added on top of the first-stage network ENET. GNET learns the local power distribution difference caused by gadolinia. Using this method, it becomes possible to survey the gradients of the sigmoid functions and the back-propagation constants in reasonable time. Using 99 learning patterns at zero burnup, a good error convergence curve is obtained after many trials. This neural network model is able to simulate unlearned cases nearly as well as the learned cases. The computing time of this neural network model is about 100 times shorter than that of the precise analysis model. (author)

  16. Construction of a Piezoresistive Neural Sensor Array

    Science.gov (United States)

    Carlson, W. B.; Schulze, W. A.; Pilgrim, P. M.

    1996-01-01

    The construction of a piezoresistive-piezoelectric sensor (or actuator) array is proposed using 'neural' connectivity for signal recognition and possible actuation functions. A closer integration of the sensor and decision functions is necessary in order to achieve intrinsic identification within the sensor. A neural sensor is the next logical step in the development of truly 'intelligent' arrays. This proposal will integrate 1-3 polymer piezoresistors and MLC electroceramic devices for applications involving acoustic identification. The 'intelligent' piezoresistor-piezoelectric system incorporates printed resistors, composite resistors, and a feedback for the resetting of resistances. A model of a design is proposed in order to simulate electromechanical resistor interactions. Optimizing a sensor geometry to improve device reliability, training, and signal identification capability is the goal of this work. At present, studies predict performance of a 'smart' device with significant control of 'effective' compliance over a narrow pressure range due to a piezoresistor percolation threshold. An interesting possibility may be to use an array of control elements to shift the threshold function in order to change the level of resistance in a neural sensor array for identification or actuation applications. The proposed design employs elements of: (1) conductor-loaded polymers for a 'fast' RC time constant response; and (2) multilayer ceramics for actuation or sensing and shifting of resistance in the polymer. Other material possibilities also exist using magnetoresistive layered systems for shifting the resistance. It is proposed to use a neural net configuration to test and to help study the possible changes required in the materials design of these devices. Numerical design models utilize electromechanical elements, in conjunction with structural elements, in order to simulate piezoresistively controlled actuators and changes in the resistance of sensors.

  17. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

    A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.
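
    The following is a loose illustration of solving for a layer's weights in one shot with an SVD-based pseudo-inverse, in the spirit of, but much simpler than, the alternating-direction scheme described above; the hidden weights here are simply fixed at random rather than determined by the virtual-input-layer construction.

```python
# One-shot determination of output-layer weights via a pseudo-inverse.
import numpy as np

rng = np.random.default_rng(11)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]              # target function

W_hidden = rng.normal(size=(2, 40))                # fixed random hidden layer
b_hidden = rng.normal(size=40)
H = np.tanh(X @ W_hidden + b_hidden)               # hidden activations

w_out = np.linalg.pinv(H) @ y                      # single "iteration": SVD-based pseudo-inverse
print(f"training RMSE: {np.sqrt(np.mean((H @ w_out - y) ** 2)):.4f}")
```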

  18. Layering and Ordering in Electrochemical Double Layers

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yihua [Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439, United States; Kawaguchi, Tomoya [Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439, United States; Pierce, Michael S. [Rochester Institute of Technology, School of Physics and Astronomy, Rochester, New York 14623, United States; Komanicky, Vladimir [Faculty of Science, Safarik University, 041 54 Kosice, Slovakia; You, Hoydoo [Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439, United States

    2018-02-26

    Electrochemical double layers (EDL) form at electrified interfaces. While the Gouy-Chapman model describes moderately charged EDLs, the formation of Stern layers was predicted for highly charged EDLs. Our results provide structural evidence for a Stern layer of cations at potentials close to hydrogen evolution in alkali fluoride and chloride electrolytes. Layering was observed by x-ray crystal truncation rods and atomic-scale recoil responses of Pt(111) surface layers. Ordering in the layer is confirmed by glancing-incidence in-plane diffraction measurements.

  19. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the MacIntosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on a MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.

  20. Neural network construction via back-propagation

    International Nuclear Information System (INIS)

    Burwick, T.T.

    1994-06-01

    A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used not only to adjust the weights but also the signal functions. Going from one network to an equivalent one that has additional linear units, the non-linearity of these units, and thus their effective presence, is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.

  1. Sacred or Neural?

    DEFF Research Database (Denmark)

    Runehov, Anne Leona Cesarine

    Are religious spiritual experiences merely the product of the human nervous system? Anne L.C. Runehov investigates the potential of contemporary neuroscience to explain religious experiences. Following in the footsteps of Michael Persinger, Andrew Newberg and Eugene d'Aquili she defines...... the terminological boundaries of "religious experiences" and explores the relevant criteria for the proper evaluation of scientific research, with a particular focus on the validity of reductionist models. Runehov's thesis is that the perspectives looked at do not necessarily exclude each other but can be merged..... The question "sacred or neural?" becomes a statement "sacred and neural". The synergies thus produced provide manifold opportunities for interdisciplinary dialogue and research....

  2. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
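
    A hedged sketch of the matrix-inversion view of 1-D deconvolution: the convolution is written as a matrix and inverted with a pseudo-inverse, one of the baselines the abstract compares against a backpropagation network; the signal, kernel, and noise level below are illustrative.

```python
# 1-D deconvolution posed as matrix inversion and solved with a pseudo-inverse.
import numpy as np

rng = np.random.default_rng(5)
n = 64
signal = np.zeros(n)
signal[[10, 30, 45]] = [1.0, -0.5, 0.8]            # sparse "true" signal
kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])     # blurring kernel

# Convolution as a matrix: column i is the kernel applied to a unit impulse at i.
A = np.column_stack([np.convolve(np.eye(n)[:, i], kernel, mode="same")
                     for i in range(n)])
blurred = A @ signal + 0.01 * rng.normal(size=n)

recovered = np.linalg.pinv(A) @ blurred            # deconvolution via pseudo-inverse
print(f"reconstruction error: {np.linalg.norm(recovered - signal):.3f}")
```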

  3. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks.

  4. Temporal neural networks and transient analysis of complex engineering systems

    Science.gov (United States)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
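
    The digital gamma memory at the heart of the LOGF neuron can be sketched in a few lines: each tap is a leaky copy of the previous one, with mu trading memory depth against resolution. The tap count and the value of mu below are illustrative choices, not taken from the work.

```python
# Minimal digital gamma memory: leaky cascade of taps over an input sequence.
import numpy as np

def gamma_memory(u, n_taps=4, mu=0.5):
    """Return the gamma-filter taps for input sequence u (taps x time)."""
    T = len(u)
    x = np.zeros((n_taps + 1, T))
    x[0] = u                                        # tap 0 is the raw input
    for t in range(1, T):
        for k in range(1, n_taps + 1):
            # each tap leaks toward the previous tap's delayed value
            x[k, t] = (1 - mu) * x[k, t - 1] + mu * x[k - 1, t - 1]
    return x

u = np.zeros(30)
u[0] = 1.0                                          # impulse input
taps = gamma_memory(u)
print(np.round(taps[:, :8], 3))                     # impulse responses spread over time
```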

  5. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  6. Neural correlates of consciousness

    African Journals Online (AJOL)

    neural cells.1 Under this approach, consciousness is believed to be a product of the ... possible only when the 40 Hz electrical hum is sustained among the brain circuits, ... expect the brain stem ascending reticular activating system. (ARAS) and the ... related synchrony of cortical neurons.11 Indeed, stimulation of brainstem ...

  7. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  8. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuro-science. It draws its methods in large degree from statistical physics and its potential applications lie mainly in computer science and engineering. Neural networks models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives an historical presentation of neural networks development and interest in performing complex tasks. Then, an exhaustive overview of data management and networks computation methods is given: the supervised learning and the associative memory problem, the capacity of networks, the Perceptron networks, the functional link networks, the Madaline (Multiple Adalines) networks, the back-propagation networks, the reduced coulomb energy (RCE) networks, the unsupervised learning and the competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors from the LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix

  9. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
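
    A minimal sketch of the localized RBF approximation idea: Gaussian units centred along the state range approximate an unknown function of a periodic trajectory, with output weights fitted by least squares. This is only a caricature of the deterministic learning mechanism, not the closed-loop controller of the paper; the trajectory, target function, and widths are illustrative.

```python
# Localized Gaussian RBF units approximating a function along a periodic orbit.
import numpy as np

t = np.linspace(0, 2 * np.pi, 400)
x = np.sin(t)                                    # periodic state trajectory
f = np.cos(2 * t)                                # unknown dynamics to learn (equals 1 - 2*x**2)

centers = np.linspace(-1.2, 1.2, 25)             # RBF centres covering the orbit
width = 0.15
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(Phi, f, rcond=None)      # least-squares output weights
print(f"max approximation error: {np.abs(Phi @ w - f).max():.4f}")
```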

  10. Neural systems for control

    National Research Council Canada - National Science Library

    Omidvar, Omid; Elliott, David L

    1997-01-01

    ... is reprinted with permission from A. Barto, "Reinforcement Learning," Handbook of Brain Theory and Neural Networks, M.A. Arbib, ed.. The MIT Press, Cambridge, MA, pp. 804-809, 1995. Chapter 4, Figures 4-5 and 7-9 and Tables 2-5, are reprinted with permission, from S. Cho, "Map Formation in Proprioceptive Cortex," International Jour...

  11. Neural underpinnings of music

    DEFF Research Database (Denmark)

    Vuust, Peter; Gebauer, Line K; Witek, Maria A G

    2014-01-01

    . According to this theory, perception and learning is manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Fourth, empirical studies of neural and behavioral effects of syncopation, polyrhythm and groove will be reported, and we...

  12. Neural nets for massively parallel optimization

    Science.gov (United States)

    Dixon, Laurence C. W.; Mills, David

    1992-07-01

    To apply massively parallel processing systems to the solution of large scale optimization problems it is desirable to be able to evaluate any function f(z), z ∈ R^n, in a parallel manner. The theorem of Cybenko, Hecht-Nielsen, Hornik, Stinchcombe and White, and Funahashi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the layer to achieve a given accuracy in the function and gradient values at all points within a given n-dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described and a strategy for obtaining accurate minimal networks is presented.

  13. Deep neural mapping support vector machines.

    Science.gov (United States)

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect could be reduced by NEUROSVM, an architecture using a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified as an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained together with a virtual ordinary output layer by backpropagation; the output of its last hidden layer is then taken as input to the SVM classifier, which is trained separately. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is an explicit function expressed as a sub-network, different from an implicit function induced by a traditional kernel function. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Taking the sub-network and the SVM classifier as a whole, the joint training of DNMSVM is done by using gradient descent to optimize the objective function, with the sub-network pre-trained layer-wise via contrastive divergence learning of restricted Boltzmann machines. Compared to the separate training of NEUROSVM, this joint training gives DNMSVM advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with a radial basis function kernel), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
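
    A hedged sketch of the two-stage NEUROSVM-style procedure that DNMSVM builds on, assuming scikit-learn: an MLP is trained with its own (virtual) output layer, its last hidden layer is reused as an explicit feature map, and a linear SVM is trained on top. The joint fine-tuning and RBM pre-training that distinguish DNMSVM are omitted, and the dataset is a toy stand-in.

```python
# Two-stage sketch: MLP feature extractor, then a linear SVM on the mapped features.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# Stage 1: train the feature-extracting sub-network with a virtual output layer.
mlp = MLPClassifier(hidden_layer_sizes=(16,), activation="tanh",
                    max_iter=2000, random_state=0).fit(X, y)

# Reuse the trained hidden layer as an explicit kernel mapping.
W, b = mlp.coefs_[0], mlp.intercepts_[0]
features = np.tanh(X @ W + b)

# Stage 2: train the SVM classifier on the mapped features.
svm = LinearSVC(C=1.0).fit(features, y)
print(f"training accuracy: {svm.score(features, y):.3f}")
```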

  14. An Improved Convolutional Neural Network on Crowd Density Estimation

    Directory of Open Access Journals (Sweden)

    Pan Shao-Yun

    2016-01-01

    Full Text Available In this paper, a new method is proposed for crowd density estimation. An improved convolutional neural network is combined with a traditional texture feature. The data computed by the convolutional layer can be treated as a new kind of feature, so more useful information can be extracted from images by combining different features. In the meantime, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance to allow for its use in real world applications.

  15. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers. It means that the voice of an unknown speaker (wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) needs parameters to be set, such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by the parameter settings. Different roles require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCCs). These parameters describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data set was split into 70% training, 15% testing and 15% validation. The result of the research described in this article is a set of different parameter settings for the multilayer neural network for four speakers.

  16. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology, which classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, which include seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on the previous outputs. The accuracy of the LRN classifier turns out to be the highest among all the artificial neural networks. The overall accuracy of jackknife cross-validation is 93.2% for the data-set. These predicted results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that the protein sequence representation can better reflect the core features of membrane proteins than the classical AA composition.

  17. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  18. Standard cell-based implementation of a digital optoelectronic neural-network hardware.

    Science.gov (United States)

    Maier, K D; Beckstein, C; Blickhan, R; Erhard, W

    2001-03-10

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of one layer of the multilayer perceptron circuit, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than purely electronic neural networks, has been successfully designed.

  19. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justification of these functional models and their processing operations is required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  20. Handwritten Digits Recognition Using Neural Computing

    Directory of Open Access Journals (Sweden)

    Călin Enăchescu

    2009-12-01

    Full Text Available In this paper we present a method for the recognition of handwritten digits and a practical implementation of this method for real-time recognition. A theoretical framework for the neural networks used to classify the handwritten digits is also presented. The classification task is performed using a Convolutional Neural Network (CNN). A CNN is a special type of multi-layer neural network, trained with an optimized version of the back-propagation learning algorithm. A CNN is designed to recognize visual patterns directly from pixel images with minimal preprocessing, being capable of recognizing patterns with extreme variability (such as handwritten characters) and with robustness to distortions and simple geometric transformations. The main contributions of this paper are related to the original methods for increasing the efficiency of the learning algorithm by preprocessing the images before the learning process, and a method for increasing the precision and performance for real-time applications by removing the non-useful information from the background. By combining these strategies we have obtained an accuracy of 96.76%, using the NIST (National Institute of Standards and Technology) database as the training set.

  1. Neural field model of memory-guided search.

    Science.gov (United States)

    Kilpatrick, Zachary P; Poll, Daniel B

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
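
    A drastically simplified caricature of the two-layer structure described above: a Gaussian bump stands in for the position layer and is moved by a velocity input, while a memory array keeps persistent activity wherever the bump has been and biases the search away from it. The full neural-field dynamics (lateral connectivity, front equations) are not reproduced, and all constants are arbitrary.

```python
# Caricature of a position bump plus a growing memory layer biasing a search.
import numpy as np

n, width = 200, 5.0                        # neurons on a 1-D track, bump width
xs = np.arange(n)
position = 50.0
memory = np.zeros(n)                       # persistent activity (visited region)

rng = np.random.default_rng(4)
for step in range(300):
    velocity = rng.normal(0.0, 1.0)        # random search velocity input
    ahead = memory[min(n - 1, int(position) + 5)]
    behind = memory[max(0, int(position) - 5)]
    velocity -= 0.5 * np.sign(ahead - behind)   # steer away from remembered locations

    position = float(np.clip(position + velocity, 0, n - 1))
    bump = np.exp(-(xs - position) ** 2 / (2 * width ** 2))   # position layer
    memory = np.maximum(memory, (bump > 0.5).astype(float))   # memory front grows

print(f"fraction of track visited: {memory.mean():.2f}")
```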

  2. Neural field model of memory-guided search

    Science.gov (United States)

    Kilpatrick, Zachary P.; Poll, Daniel B.

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.

  3. Bioprinting for Neural Tissue Engineering.

    Science.gov (United States)

    Knowlton, Stephanie; Anand, Shivesh; Shah, Twisha; Tasoglu, Savas

    2018-01-01

    Bioprinting is a method by which a cell-encapsulating bioink is patterned to create complex tissue architectures. Given the potential impact of this technology on neural research, we review the current state-of-the-art approaches for bioprinting neural tissues. While 2D neural cultures are ubiquitous for studying neural cells, 3D cultures can more accurately replicate the microenvironment of neural tissues. By bioprinting neuronal constructs, one can precisely control the microenvironment by specifically formulating the bioink for neural tissues, and by spatially patterning cell types and scaffold properties in three dimensions. We review a range of bioprinted neural tissue models and discuss how they can be used to observe how neurons behave, understand disease processes, develop new therapies and, ultimately, design replacement tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A gentle introduction to artificial neural networks.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-10-01

    Artificial neural network (ANN) is a flexible and powerful machine learning technique. However, it is underutilized in clinical medicine because of its technical challenges. The article introduces some basic ideas behind ANN and shows how to build an ANN using R in a step-by-step framework. In topology and function, an ANN is analogous to the human brain. Input signals are transmitted from input to output nodes and are weighted before reaching the output nodes according to their respective importance. The combined signal is then processed by an activation function. I simulated a simple example to illustrate how to build a simple ANN model using the nnet() function. This function allows for one hidden layer with a varying number of units in that layer. The basic structure of the ANN can be visualized with the plug-in plot.nnet() function. The plot function is powerful in that it allows a variety of adjustments to the appearance of the neural network. Prediction with an ANN can be performed with the predict() function, similar to that of conventional generalized linear models. Finally, the prediction power of the ANN is examined using a confusion matrix and average accuracy. It appears that ANN is slightly better than the conventional linear model.
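
    The article works in R with nnet(); an analogous sketch in Python with scikit-learn (the dataset and hidden-layer size are placeholders) follows the same steps of fitting a single-hidden-layer network, predicting, and checking a confusion matrix and accuracy:

      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import confusion_matrix, accuracy_score

      X, y = make_classification(n_samples=500, n_features=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      ann = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)  # one hidden layer
      ann.fit(X_tr, y_tr)

      pred = ann.predict(X_te)               # analogous to predict() on an nnet model
      print(confusion_matrix(y_te, pred))
      print("accuracy:", accuracy_score(y_te, pred))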

  5. Classifying images using restricted Boltzmann machines and convolutional neural networks

    Science.gov (United States)

    Zhao, Zhijun; Xu, Tongde; Dai, Chenyu

    2017-07-01

    To improve the feature recognition ability of deep model transfer learning, we propose a hybrid deep transfer learning method for image classification based on restricted Boltzmann machines (RBM) and convolutional neural networks (CNNs). It integrates the learning abilities of the two models and performs subject classification by extracting structural higher-order statistical features of images. When the method transfers the trained convolutional neural networks to the target datasets, the fully-connected layers are replaced by restricted Boltzmann machine layers; the restricted Boltzmann machine layers and the Softmax classifier are then retrained, and a BP neural network is used to fine-tune the hybrid model. The restricted Boltzmann machine layers not only fully integrate the feature maps, but also learn the statistical features of the target datasets by maximizing the log-likelihood, thus removing the effects caused by the content differences between datasets. The experimental results show that the proposed method has improved the accuracy of image classification, outperforming other methods on the Pascal VOC2007 and Caltech101 datasets.

  6. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.

    Science.gov (United States)

    Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita

    2018-03-01

    Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose the convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of two stacked convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers, with data augmentation strategies to boost performance. The use of the neural network results in an average classification accuracy of 92%. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
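
    A sketch of an eight-learned-layer network of the kind described, in Python with PyTorch (filter counts, tile size and the three-class output are assumptions rather than the paper's exact values):

      import torch
      import torch.nn as nn

      def conv_block(c_in, c_out):
          # two stacked convolutional layers followed by max pooling
          return nn.Sequential(
              nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
              nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
              nn.MaxPool2d(2),
          )

      class OsteoNet(nn.Module):
          def __init__(self, num_classes: int = 3):       # viable tumor, necrosis, non-tumor
              super().__init__()
              self.features = nn.Sequential(conv_block(3, 16), conv_block(16, 32), conv_block(32, 64))
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(64 * 16 * 16, 128), nn.ReLU(),   # 6 conv + 2 fully connected = 8 learned layers
                  nn.Linear(128, num_classes),
              )

          def forward(self, x):                            # x: (batch, 3, 128, 128) RGB tiles (assumed size)
              return self.classifier(self.features(x))

      print(OsteoNet()(torch.randn(4, 3, 128, 128)).shape)   # torch.Size([4, 3])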

  7. Processing of chromatic information in a deep convolutional neural network.

    Science.gov (United States)

    Flachot, Alban; Gegenfurtner, Karl R

    2018-04-01

    Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to the ones performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained on object classification of over 1.2M images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogously to what is known of color responsive cells in the primate thalamus. Moreover, these directions are decorrelated and lead to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found, in analogy to the early stages of the primate visual system, that chromatic and achromatic information were segregated in the early layers of the network. Units in the higher layers of AlexNet exhibit on average a lower responsivity for color than units at earlier stages.

  8. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam

    2013-10-01

    Artificial neural networks have the abilities to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.
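
    A sketch of the final selection step in Python (NumPy assumed; the error/time values are made up): given measured (label error, training time) pairs for candidate layer-size configurations, keep those on the Pareto optimal frontier, i.e. configurations that no other configuration beats on both criteria.

      import numpy as np

      configs = np.array([[26.2, 10.0],    # columns: label error (%), relative training time
                          [22.5, 14.0],
                          [19.8, 30.0],
                          [21.0, 25.0],
                          [20.5, 18.0]])

      def pareto_front(points):
          keep = []
          for i, p in enumerate(points):
              dominated = any((q <= p).all() and (q < p).any()
                              for j, q in enumerate(points) if j != i)
              if not dominated:
                  keep.append(i)
          return keep

      print("Pareto-optimal configurations:", pareto_front(configs))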

  9. Determining the confidence levels of sensor outputs using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Broten, G S; Wood, H C [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

    1996-12-31

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with the output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in
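
    A minimal sketch of the training setup described (three-layer back-propagation network, learning rate 0.1, no momentum) in Python with scikit-learn; the sensor readings and confidence targets here are synthetic placeholders:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      sensor_outputs = rng.uniform(0, 1, size=(400, 6))     # array of partially selective sensors
      confidence = rng.uniform(0, 100, size=(400, 3))       # target confidence levels (%), synthetic

      net = MLPRegressor(hidden_layer_sizes=(10,), solver="sgd",
                         learning_rate_init=0.1, momentum=0.0,   # learning rate 0.1, no momentum
                         max_iter=5000, random_state=0)
      net.fit(sensor_outputs, confidence)
      print(net.predict(sensor_outputs[:2]).round(1))       # estimated confidence levels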

  10. Fast neutron spectra determination by threshold activation detectors using neural networks

    International Nuclear Information System (INIS)

    Kardan, M.R.; Koohi-Fayegh, R.; Setayeshi, S.; Ghiassi-Nejad, M.

    2004-01-01

    The neural network method was used for fast neutron spectrum unfolding in spectrometry with threshold activation detectors. The input layer of the neural networks consisted of 11 neurons for the specific activities of neutron-induced nuclear reaction products, while the output layers were fast neutron spectra subdivided into 6, 8, 10, 12, 15 and 20 energy bins. Neural network training was performed with 437 fast neutron spectra and the corresponding threshold activation detector readings. The trained neural networks were applied to unfold 50 spectra that were not in the training sets, and the results were compared with the real spectra and with spectra unfolded by SANDII. The best results belong to the 10 energy bin spectra. The neural network was also trained with detector readings with 5% uncertainty, and the response of the trained neural network to detector readings with 5%, 10%, 15%, 20%, 25% and 50% uncertainty was compared with the real spectra. The neural network algorithm, in comparison with other unfolding methods, is very fast, requires neither the detector response matrix nor any prior information about the spectra, and its outputs have low sensitivity to uncertainty in the activity measurements. The results show that the neural network algorithm is useful when a fast response is required with reasonable accuracy

  11. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxy architecture consists of 8 layers, one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  12. Analysis of neural data

    CERN Document Server

    Kass, Robert E; Brown, Emery N

    2014-01-01

    Continual improvements in data collection and processing have had a huge impact on brain research, producing data sets that are often large and complicated. By emphasizing a few fundamental principles, and a handful of ubiquitous techniques, Analysis of Neural Data provides a unified treatment of analytical methods that have become essential for contemporary researchers. Throughout the book ideas are illustrated with more than 100 examples drawn from the literature, ranging from electrophysiology, to neuroimaging, to behavior. By demonstrating the commonality among various statistical approaches the authors provide the crucial tools for gaining knowledge from diverse types of data. Aimed at experimentalists with only high-school level mathematics, as well as computationally-oriented neuroscientists who have limited familiarity with statistics, Analysis of Neural Data serves as both a self-contained introduction and a reference work.

  13. Neural networks for triggering

    International Nuclear Information System (INIS)

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab

  14. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  15. Neural Mechanisms of Foraging

    OpenAIRE

    Kolling, Nils; Behrens, Timothy EJ; Mars, Rogier B; Rushworth, Matthew FS

    2012-01-01

    Behavioural economic studies, involving limited numbers of choices, have provided key insights into neural decision-making mechanisms. By contrast, animals’ foraging choices arise in the context of sequences of encounters with prey/food. On each encounter the animal chooses whether to engage, or whether the environment is sufficiently rich that searching elsewhere is merited. The cost of foraging is also critical. We demonstrate humans can alternate between two modes of choice, comparative decision-ma...

  16. Noise-enhanced categorization in a recurrently reconnected neural network

    International Nuclear Information System (INIS)

    Monterola, Christopher; Zapotocky, Martin

    2005-01-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails

  17. Noise-enhanced categorization in a recurrently reconnected neural network

    Science.gov (United States)

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence makes it possible to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with the optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a simpler associative memory network in which noise-mediated categorization fails.

  18. Statistical modelling of neural networks in γ-spectrometry applications

    International Nuclear Information System (INIS)

    Vigneron, V.; Martinez, J.M.; Morel, J.; Lepy, M.C.

    1995-01-01

    Layered neural networks, which are a class of models based on neural computation, are applied to the measurement of uranium enrichment, i.e. the isotope ratio 235U/(235U + 236U + 238U). The usual methods consider a limited number of γ-ray and X-ray peaks and require previously calibrated instrumentation for each sample. In practice, however, the source-detector geometry conditions are critically different, so one means of improving the conventional methods is to reduce the region of interest: this is possible by focusing on the Kα X-ray region where the three elementary components are present. Real data are used to study the performance of the neural networks. Training is done with a maximum likelihood method to measure the 235U and 238U quantities in infinitely thick samples. (authors). 18 refs., 6 figs., 3 tabs

  19. Forecasting Flare Activity Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Hernandez, T.

    2017-12-01

    Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.

  20. HIV lipodystrophy case definition using artificial neural network modelling

    DEFF Research Database (Denmark)

    Ioannidis, John P A; Trikalinos, Thomas A; Law, Matthew

    2003-01-01

    OBJECTIVE: A case definition of HIV lipodystrophy has recently been developed from a combination of clinical, metabolic and imaging/body composition variables using logistic regression methods. We aimed to evaluate whether artificial neural networks could improve the diagnostic accuracy. METHODS......: The database of the case-control Lipodystrophy Case Definition Study was split into 504 subjects (265 with and 239 without lipodystrophy) used for training and 284 independent subjects (152 with and 132 without lipodystrophy) used for validation. Back-propagation neural networks with one or two middle layers...... were trained and validated. Results were compared against logistic regression models using the same information. RESULTS: Neural networks using clinical variables only (41 items) achieved consistently superior performance than logistic regression in terms of specificity, overall accuracy and area under...

  1. Diagnosis method utilizing neural networks

    International Nuclear Information System (INIS)

    Watanabe, K.; Tamayama, K.

    1990-01-01

    Studies have been made on the technique of neural networks, which will be used to identify the cause of a small anomalous state in the reactor coolant system of the ATR (Advanced Thermal Reactor). Three phases of analysis were carried out in this study. First, simulations of 100 seconds were made to determine how the plant parameters respond after the occurrence of a transient decrease in reactivity, flow rate and temperature of feed water, and an increase in the steam flow rate and steam pressure, all of which would produce a decrease of water level in a steam drum of the ATR. Next, the simulation data were analysed using an autoregressive model. From this analysis, a total of 36 coherency functions up to 0.5 Hz in each transient were computed among nine important and detectable plant parameters: neutron flux, flow rate of coolant, steam or feed water, water level in the steam drum, pressure and opening area of the control valve in a steam pipe, feed water temperature and electrical power. Last, learning of neural networks composed of 96 input, 4-9 hidden and 5 output layer units was done by use of the generalized delta rule, namely a back-propagation algorithm. These convergent computations were continued until the difference between the desired outputs (1 for the direct cause, 0 for the four other ones) and the actual outputs fell below 10%. (1) Coherency functions were not governed by the decreasing rate of reactivity in the range of 0.41x10^-2 dollar/s to 1.62x10^-2 dollar/s, by the decreasing depth of the feed water temperature in the range of 3 deg C to 10 deg C, or by a change of 10% or less in the three other causes. Changes in the coherency functions depended only on the type of cause. (2) The direct cause could be discriminated from the other four with an output level of 0.94±0.01. A maximum output height of 0.06 was found among the other four causes. (3) The calculation load, which is represented as the product of learning iterations and the number of hidden units, did not depend on the
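
    A minimal sketch of the classifier stage in Python with scikit-learn (96 coherency-function inputs, a small hidden layer, five output classes for the candidate causes); the coherency data here is synthetic, not the ATR simulation data:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      coherency = rng.uniform(0, 1, size=(200, 96))   # 96 coherency values per transient
      cause = rng.integers(0, 5, size=200)            # one of five candidate causes

      clf = MLPClassifier(hidden_layer_sizes=(9,), solver="sgd", max_iter=3000, random_state=0)
      clf.fit(coherency, cause)
      print(clf.predict_proba(coherency[:1]).round(2))   # ideally close to 1 for the direct cause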

  2. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  3. Improved algorithms for circuit fault diagnosis based on wavelet packet and neural network

    International Nuclear Information System (INIS)

    Zhang, W-Q; Xu, C

    2008-01-01

    In this paper, two improved BP neural network algorithms for fault diagnosis of analog circuits are presented, using the optimal wavelet packet transform (OWPT) or the incomplete wavelet packet transform (IWPT) as a preprocessor. The purpose of preprocessing is to reduce the number of nodes in the input layer and hidden layer of the BP neural network, so that the neural network gains faster training and convergence speed. First, we apply OWPT or IWPT to the response signal of the circuit under test (CUT), and then calculate the normalized energy of each frequency band. The normalized energy is used to train the BP neural network to diagnose faulty components in the analog circuit. These two algorithms need a small network size while achieving faster learning and convergence speed. Finally, simulation results illustrate that the two algorithms are effective for fault diagnosis
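
    A sketch of the preprocessing idea in Python (the pywt package is assumed, and a plain discrete wavelet decomposition stands in here for the optimal/incomplete wavelet packet transforms of the paper): decompose the circuit response into frequency bands and use each band's normalized energy as a compact input vector for the BP network.

      import numpy as np
      import pywt

      def band_energy_features(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)   # approximation + detail bands
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          return energies / energies.sum()                      # normalized energy per band

      response = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
      print(band_energy_features(response).round(3))            # small feature vector for the BP network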

  4. Results from a MA16-based neural trigger in an experiment looking for beauty

    International Nuclear Information System (INIS)

    Baldanza, C.; Beichter, J.; Bisi, F.; Bruels, N.; Bruschini, C.; Cotta-Ramusino, A.; D'Antone, I.; Malferrari, L.; Mazzanti, P.; Musico, P.; Novelli, P.; Odorici, F.; Odorico, R.; Passaseo, M.; Zuffa, M.

    1996-01-01

    Results from a neural-network trigger based on the digital MA16 chip of Siemens are reported. The neural trigger has been applied to data from the WA92 experiment, looking for beauty particles, which have been collected during a run in which a neural trigger module based on Intel's analog neural chip ETANN operated, as already reported. The MA16 board hosting the chip has a 16-bit I/O precision and a 53-bit precision for internal calculations. It operated at 50 MHz, yielding a response time for a 16 input-variable net of 3 μs for a Fisher discriminant (1-layer net) and of 6 μs for a 2-layer net. Results are compared with those previously obtained with the ETANN trigger. (orig.)

  5. Results from a MA16-based neural trigger in an experiment looking for beauty

    Energy Technology Data Exchange (ETDEWEB)

    Baldanza, C. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Beichter, J. [Siemens AG, ZFE T ME2, 81730 Munich (Germany); Bisi, F. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Bruels, N. [Siemens AG, ZFE T ME2, 81730 Munich (Germany); Bruschini, C. [INFN/Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Cotta-Ramusino, A. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); D'Antone, I. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Malferrari, L. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Mazzanti, P. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Musico, P. [INFN/Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Novelli, P. [INFN/Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Odorici, F. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Odorico, R. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Passaseo, M. [CERN, 1211 Geneva 23 (Switzerland); Zuffa, M. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy)

    1996-07-11

    Results from a neural-network trigger based on the digital MA16 chip of Siemens are reported. The neural trigger has been applied to data from the WA92 experiment, looking for beauty particles, which have been collected during a run in which a neural trigger module based on Intel's analog neural chip ETANN operated, as already reported. The MA16 board hosting the chip has a 16-bit I/O precision and a 53-bit precision for internal calculations. It operated at 50 MHz, yielding a response time for a 16 input-variable net of 3 μs for a Fisher discriminant (1-layer net) and of 6 μs for a 2-layer net. Results are compared with those previously obtained with the ETANN trigger. (orig.).

  6. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).

  7. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  8. Application of Artificial Neural Networks for estimating index floods

    Science.gov (United States)

    Šimor, Viliam; Hlavčová, Kamila; Kohnová, Silvia; Szolgay, Ján

    2012-12-01

    This article presents an application of Artificial Neural Networks (ANNs) and multiple regression models for estimating the mean annual maximum discharge (index flood) at ungauged sites. Both approaches were tested for 145 small basins in Slovakia with areas ranging from 20 to 300 km2. Using an objective clustering method, the catchments were divided into ten homogeneous pooling groups; for each pooling group, mutually independent predictors (catchment characteristics) were selected for both models. The neural network was applied as a simple multilayer perceptron with one hidden layer and a back-propagation learning algorithm. The hyperbolic tangent was used as the activation function in the hidden layer. Estimation of index floods by the multiple regression models was based on deriving relationships between the index floods and catchment predictors. The efficiency of both approaches was tested by the Nash-Sutcliffe coefficient and a correlation coefficient. The results showed the comparative applicability of both models, with slightly better results for the index floods achieved using the ANN methodology.
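
    A sketch of the ANN variant described (multilayer perceptron, one hidden layer, tanh activation) together with the Nash-Sutcliffe efficiency used to score it, in Python with scikit-learn; the catchment predictors and index floods below are synthetic placeholders.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def nash_sutcliffe(observed, simulated):
          observed, simulated = np.asarray(observed), np.asarray(simulated)
          return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

      rng = np.random.default_rng(0)
      predictors = rng.uniform(size=(145, 4))                  # catchment characteristics
      index_flood = predictors @ np.array([3.0, 1.5, 0.5, 2.0]) + rng.normal(scale=0.1, size=145)

      ann = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh", max_iter=5000, random_state=0)
      ann.fit(predictors, index_flood)
      print("NSE:", round(nash_sutcliffe(index_flood, ann.predict(predictors)), 3))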

  9. Goal-seeking neural net for recall and recognition

    Science.gov (United States)

    Omidvar, Omid M.

    1990-07-01

    Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher order information processing activities. In this research a model of a goal seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses recall is defined as retrieval of stored information where little or no matching is involved. On the other hand recognition is recall with matching; therefore it involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all the signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal seeking nature of the neurons as network components. In the proposed model all the synaptic strengths are reinforced in parallel while the reinforcement among the layers is done in a distributed fashion and pipeline mode from the last layer inward. A model of complex neuron with varying threshold is developed to account for inhibitory and excitatory behavior of real neuron. A goal seeking model of a neural network is presented. This network is utilized to perform recall and recognition tasks. The performance of the model with regard to the assigned tasks is presented.

  10. Using neural networks for prediction of nuclear parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pereira Filho, Leonidas; Souto, Kelling Cabral, E-mail: leonidasmilenium@hotmail.com, E-mail: kcsouto@bol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, RJ (Brazil); Machado, Marcelo Dornellas, E-mail: dornemd@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (GCN.T/ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil). Gerencia de Combustivel Nuclear

    2013-07-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts developed a study on the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed, until the 1980s witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including nuclear. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type will be trained to predict these values. The main objective of this work is the development of a Multi-layer Perceptron (MLP) artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem - Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three models of networks are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor, 2 - Boron Concentration and 3 - Power Peaking Factor. (author)

  11. Using neural networks for prediction of nuclear parameters

    International Nuclear Information System (INIS)

    Pereira Filho, Leonidas; Souto, Kelling Cabral; Machado, Marcelo Dornellas

    2013-01-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts developed a study on the behavior of the biological neuron with the goal of creating a mathematical model. Further work followed, until the 1980s witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including nuclear. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when it is necessary to perform various core evaluations, for example core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type will be trained to predict these values. The main objective of this work is the development of a Multi-layer Perceptron (MLP) artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem - Boron Concentration and Power Peaking Factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three models of networks are constructed using the same input data and providing the following outputs: 1 - Boron Concentration and Power Peaking Factor, 2 - Boron Concentration and 3 - Power Peaking Factor. (author)

  12. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    The aims of the present study were: to identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between the preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network was used (a multilayer perceptron with three neurons in a single hidden layer). The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach for the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports

  13. A neural network model of ventriloquism effect and aftereffect.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
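
    A toy illustration of the cross-modal feedback idea in Python (NumPy assumed; the tuning widths and feedback gain are arbitrary choices, and only the effect is shown, not the full two-layer network or its Hebbian aftereffect mechanism): a sharply tuned visual input biases the read-out of a broadly tuned auditory input toward the visual location.

      import numpy as np

      positions = np.arange(100)                                        # 1-D azimuth, degrees
      gauss = lambda centre, width: np.exp(-((positions - centre) ** 2) / (2 * width ** 2))

      auditory_in = gauss(50, 10)     # broad auditory stimulus at 50 deg
      visual_in = gauss(60, 2)        # sharp visual stimulus at 60 deg
      feedback_gain = 0.5

      auditory_act = auditory_in + feedback_gain * visual_in            # excitatory inter-layer feedback
      print("perceived sound position:", positions[np.argmax(auditory_act)])   # shifted from 50 toward 60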

  14. A neural network model of ventriloquism effect and aftereffect.

    Directory of Open Access Journals (Sweden)

    Elisa Magosso

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  15. Application of neural networks and its prospect. 1. General comment on application to nuclear fusion and plasma researches

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki

    2006-01-01

    The background of the application of neural networks to R and D in scientific fields and the growing range of application fields are stated. A definition of neural networks, the kinds of neural networks and their functions, error back propagation, and generalization are explained. Applications of multi-layer neural networks to nuclear fusion and plasma research are described in terms of inverse problems, interpolation, time series prediction, and computerized tomography. Some examples of research, such as MHD of plasma from magnetic probe data of fusion reactor systems, parameter prediction of the distribution of the impurity spectra and the charge exchange neutral particle energy spectra, disruption prediction, and the residual minimization training neural network, are commented on. (S.Y.)

  16. The use of global image characteristics for neural network pattern recognitions

    Science.gov (United States)

    Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.

    2017-04-01

    A recognition system is considered in which information is transferred as images of symbols generated by a television camera. As descriptors of the objects, coefficients of a two-dimensional Fourier transformation generated in a special way are used. For the classification task, a one-layer neural network trained on reference images is used. Fast learning of the neural network with single-neuron calculation of the coefficients is applied.

  17. Potential usefulness of an artificial neural network for assessing ventricular size

    International Nuclear Information System (INIS)

    Fukuda, Haruyuki; Nakajima, Hideyuki; Usuki, Noriaki; Saiwai, Shigeo; Miyamoto, Takeshi; Inoue, Yuichi; Onoyama, Yasuto.

    1995-01-01

    An artificial neural network approach was applied to assess ventricular size from computed tomograms. Three-layer, feed-forward neural networks with a back propagation algorithm were designed to distinguish between three degrees of enlargement of the ventricles on the basis of the patient's age and six items of computed tomographic information. Data for training and testing the neural network were created with computed tomograms of brains selected at random from daily examinations. Four radiologists decided by mutual consent, subjectively and based on their experience, whether the ventricles were within normal limits, slightly enlarged, or enlarged for the patient's age. The data for training were obtained from 38 patients. The data for testing were obtained from 47 other patients. The performance of the trained neural network was evaluated by the rate of correct answers on the test data. The ratio of valid solutions in response to the test data obtained from the trained neural networks was more than 90% for all conditions in this study. The solutions were completely valid in the neural networks with two or three units in the hidden layer with 2,200 learning iterations, and with two units in the hidden layer with 11,000 learning iterations. The squared error decreased remarkably in the range from 0 to 500 learning iterations, and was close to constant beyond two thousand learning iterations. The neural network with a hidden layer having two or three units showed high decision performance. The preliminary results strongly suggest that the neural network approach has potential utility in computer-aided estimation of enlargement of the ventricles. (author)

  18. Cyclone track forecasting based on satellite images using artificial neural networks

    OpenAIRE

    Kovordanyi, Rita; Roy, Chandan

    2009-01-01

    Many places around the world are exposed to tropical cyclones and associated storm surges. In spite of massive efforts, a great number of people die each year as a result of cyclone events. To mitigate this damage, improved forecasting techniques must be developed. The technique presented here uses artificial neural networks to interpret NOAA-AVHRR satellite images. A multi-layer neural network, resembling the human visual system, was trained to forecast the movement of cyclones based on sate...

  19. Artificial Neural Networks for Nonlinear Dynamic Response Simulation in Mechanical Systems

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Høgsberg, Jan Becker; Winther, Ole

    2011-01-01

    It is shown how artificial neural networks can be trained to predict the dynamic response of a simple nonlinear structure. Data generated using a nonlinear finite element model of a simplified wind turbine are used to train a one-layer artificial neural network. When trained properly, the network is able to perform accurate response prediction much faster than the corresponding finite element model. Initial results indicate a reduction in CPU time by two orders of magnitude....

  20. PERFORMANCE EVALUATION OF VARIANCES IN BACKPROPAGATION NEURAL NETWORK USED FOR HANDWRITTEN CHARACTER RECOGNITION

    OpenAIRE

    Vairaprakash Gurusamy *1 & K.Nandhini2

    2017-01-01

    A Neural Network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Back propagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. The term back pro...

  1. Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices

    OpenAIRE

    Lim, Suhwan; Bae, Jong-Ho; Eum, Jai-Ho; Lee, Sungtae; Kim, Chul-Heung; Kwon, Dongseok; Park, Byung-Gook; Lee, Jong-Ho

    2017-01-01

    In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward, backward propagation, as well as weight updates in hardware, is helpful during the implementation of power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...

  2. ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation

    OpenAIRE

    Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron

    2015-01-01

    We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally ...

  3. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with OpenCV libraries. The developed system can be split into two different modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The Neural Network was implemented using the backpropagation algorithm, and was trained with 42 training images, containing 57 lightning events (one image can have more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files, containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a
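
    A minimal sketch of the detection module's core step in Python (OpenCV and NumPy assumed; the threshold values and frame size are illustrative): flag a frame as containing an event when its pixel-level difference from the previous frame is large enough.

      import numpy as np
      import cv2

      def significant_change(prev_gray, curr_gray, pixel_thresh=40, count_thresh=500):
          diff = cv2.absdiff(curr_gray, prev_gray)                       # per-pixel absolute difference
          _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
          return np.count_nonzero(mask) > count_thresh                   # "something occurred" in this frame

      prev = np.zeros((480, 640), dtype=np.uint8)                        # stand-in grayscale frames
      curr = prev.copy()
      curr[100:160, 300:310] = 255                                       # bright vertical streak (toy event)
      print(significant_change(prev, curr))                              # True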

  4. Evaluation of the Performance of Feedforward and Recurrent Neural Networks in Active Cancellation of Sound Noise

    Directory of Open Access Journals (Sweden)

    Mehrshad Salmasi

    2012-07-01

    Active noise control is based on destructive interference between the primary noise and the noise generated from a secondary source. An antinoise of equal amplitude and opposite phase is generated and combined with the primary noise. In this paper, the performance of neural networks in the active cancellation of sound noise is evaluated. To this end, feedforward and recurrent neural networks are designed and trained. After training, the performance of the feedforward and recurrent networks in noise attenuation is compared. We use the Elman network as the recurrent neural network. For the simulations, noise signals from the SPIB database are used. In order to compare the networks appropriately, equal numbers of layers and neurons are considered for the networks. Moreover, the training and test samples are similar. Simulation results show that both feedforward and recurrent neural networks present good performance in noise cancellation. As can be seen, the ability of the recurrent neural network in noise attenuation is better than that of the feedforward network.

  5. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
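
    A sketch of the multiplicative tensor-basis layer in Python with PyTorch (the number of invariants and basis tensors, and the random inputs, are placeholders rather than the full Pope basis used in the paper): a small network maps scalar invariants to coefficients, and the predicted anisotropy tensor is the coefficient-weighted sum of the basis tensors.

      import torch
      import torch.nn as nn

      n_basis, n_invariants = 4, 2
      coeff_net = nn.Sequential(nn.Linear(n_invariants, 16), nn.ReLU(), nn.Linear(16, n_basis))

      invariants = torch.randn(8, n_invariants)        # per-point scalar invariants (batch of 8 points)
      T = torch.randn(8, n_basis, 3, 3)                # per-point basis tensors T_i

      g = coeff_net(invariants)                        # coefficients g_i(invariants)
      b = torch.einsum("bi,bijk->bjk", g, T)           # multiplicative layer: b = sum_i g_i * T_i
      print(b.shape)                                   # torch.Size([8, 3, 3])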

  6. Nuclear power plant monitoring method by neural network and its application to actual nuclear reactor

    International Nuclear Information System (INIS)

    Nabeshima, Kunihiko; Suzuki, Katsuo; Shinohara, Yoshikuni; Tuerkcan, E.

    1995-11-01

    In this paper, an anomaly detection method for nuclear power plant monitoring and its program are described, using a neural network approach based on the deviation between measured signals and the output signals of a neural network model. The neural network used in this study is a three-layer auto-associative network with 12 inputs/outputs, and a backpropagation algorithm is adopted for learning. Furthermore, to obtain a better dynamical model of the reactor plant, a new learning technique was developed in which the learning process of the present neural network is divided into initial and adaptive learning modes. The test results at an actual nuclear reactor show that the neural network plant monitoring system is successful in detecting, in real time, the symptoms of small anomalies over a wide power range including reactor start-up, shut-down and stationary operation. (author)
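
    A minimal sketch of the monitoring scheme in Python with scikit-learn (the data and hidden-layer size are synthetic stand-ins): a three-layer auto-associative network learns to reproduce 12 normal plant signals, and the deviation between measured and reconstructed signals serves as the anomaly indicator.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      normal = rng.normal(size=(500, 12))                  # 12 plant signals under normal operation

      autoassoc = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
      autoassoc.fit(normal, normal)                        # auto-associative: reproduce the inputs

      def anomaly_score(signals):
          return np.mean((signals - autoassoc.predict(signals)) ** 2, axis=1)

      drifted = normal[:5].copy()
      drifted[:, 2] += 3.0                                 # one sensor drifts away from normal operation
      print(anomaly_score(normal[:5]).round(3), anomaly_score(drifted).round(3))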

  7. Radial basis function neural network for power system load-flow

    International Nuclear Information System (INIS)

    Karami, A.; Mohammadi, M.S.

    2008-01-01

    This paper presents a method for solving the load-flow problem of the electric power systems using radial basis function (RBF) neural network with a fast hybrid training method. The main idea is that some operating conditions (values) are needed to solve the set of non-linear algebraic equations of load-flow by employing an iterative numerical technique. Therefore, we may view the outputs of a load-flow program as functions of the operating conditions. Indeed, we are faced with a function approximation problem and this can be done by an RBF neural network. The proposed approach has been successfully applied to the 10-machine and 39-bus New England test system. In addition, this method has been compared with that of a multi-layer perceptron (MLP) neural network model. The simulation results show that the RBF neural network is a simpler method to implement and requires less training time to converge than the MLP neural network. (author)
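
    A sketch of an RBF network with the usual fast hybrid training, in Python (NumPy and scikit-learn assumed; the operating conditions and load-flow output below are synthetic): cluster the inputs to place the radial basis centres, then solve for the linear output weights in one least-squares step.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(300, 4))                          # operating conditions (e.g. loads, generations)
      y = np.sin(X @ np.array([2.0, 1.0, 0.5, 1.5]))          # stand-in for a load-flow output

      centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
      width = 0.5

      def design_matrix(X):
          d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * width ** 2))               # Gaussian radial basis functions

      w, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)   # linear output layer
      print("train RMSE:", np.sqrt(np.mean((design_matrix(X) @ w - y) ** 2)).round(4))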

  8. Terrain Mapping and Classification in Outdoor Environments Using Neural Networks

    OpenAIRE

    Alberto Yukinobu Hata; Denis Fernando Wolf; Gustavo Pessin; Fernando Osório

    2009-01-01

    This paper describes a three-dimensional terrain mapping and classification technique to allow the operation of mobile robots in outdoor environments using laser range finders. We propose the use of a multi-layer perceptron neural network to classify the terrain into navigable, partially navigable, and non-navigable. The maps generated by our approach can be used for path planning, navigation, and local obstacle avoidance. Experimental tests using an outdoor robot and a laser sensor demonstra...

  9. Process for forming synapses in neural networks and resistor therefor

    Science.gov (United States)

    Fu, Chi Y.

    1996-01-01

    Customizable neural network in which one or more resistors form each synapse. All the resistors in the synaptic array are identical, thus simplifying the processing issues. Highly doped, amorphous silicon is used as the resistor material, to create extremely high resistances occupying very small spaces. Connected in series with each resistor in the array is at least one severable conductor whose uppermost layer has a lower reflectivity of laser energy than typical metal conductors at a desired laser wavelength.

  10. Fastest learning in small-world neural networks

    International Nuclear Information System (INIS)

    Simard, D.; Nadeau, L.; Kroeger, H.

    2005-01-01

    We investigate supervised learning in neural networks. We consider a multi-layered feed-forward network with back propagation. We find that the network of small-world connectivity reduces the learning error and learning time when compared to the networks of regular or random connectivity. Our study has potential applications in the domain of data-mining, image processing, speech recognition, and pattern recognition

  11. Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks

    OpenAIRE

    Shen, Li; Lin, Zhouchen; Huang, Qingming

    2015-01-01

    Learning deeper convolutional neural networks has become a trend in recent years. However, much empirical evidence suggests that performance improvement cannot be gained by simply stacking more layers. In this paper, we consider the issue from an information-theoretical perspective and propose a novel method, Relay Backpropagation, which encourages the propagation of effective information through the network during the training stage. By virtue of the method, we achieved the first place in ILSVRC 2015...

  12. An automatic system for Turkish word recognition using Discrete Wavelet Neural Network based on adaptive entropy

    International Nuclear Information System (INIS)

    Avci, E.

    2007-01-01

    In this paper, an automatic system is presented for word recognition using real Turkish word signals. The paper especially deals with the combination of feature extraction and classification for real Turkish word signals. A Discrete Wavelet Neural Network (DWNN) model is used, which consists of two layers: a discrete wavelet layer and a multi-layer perceptron. The discrete wavelet layer is used for adaptive feature extraction in the time-frequency domain and is composed of the Discrete Wavelet Transform (DWT) and wavelet entropy. The multi-layer perceptron used for classification is a feed-forward neural network. The performance of the system is evaluated using noisy Turkish word signals. Test results showing the effectiveness of the proposed automatic system are presented in this paper. The rate of correct recognition is about 92.5% for the sample speech signals. (author)
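
    A rough sketch of the feature-extraction step, assuming a Haar wavelet as a stand-in for whatever mother wavelet the authors used; the signal length, level count and names are illustrative only.

        import numpy as np

        def haar_dwt(signal):
            """One-level Haar DWT: returns approximation and detail coefficients."""
            s = signal[:len(signal) // 2 * 2].reshape(-1, 2)
            return (s[:, 0] + s[:, 1]) / np.sqrt(2), (s[:, 0] - s[:, 1]) / np.sqrt(2)

        def wavelet_entropy(signal, levels=3):
            """Shannon entropy of the relative energies of the wavelet sub-bands."""
            energies, approx = [], np.asarray(signal, dtype=float)
            for _ in range(levels):
                approx, detail = haar_dwt(approx)
                energies.append(np.sum(detail ** 2))
            energies.append(np.sum(approx ** 2))
            p = np.array(energies) / np.sum(energies)
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)))

        rng = np.random.default_rng(13)
        word_signal = rng.normal(size=1024)          # stand-in for a sampled word signal
        print(wavelet_entropy(word_signal))          # one scalar feature for the classifier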

  13. Multilayer Neural Networks with Extensively Many Hidden Units

    International Nuclear Information System (INIS)

    Rosen-Zvi, Michal; Engel, Andreas; Kanter, Ido

    2001-01-01

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones

  14. Identification of generalized state transfer matrix using neural networks

    International Nuclear Information System (INIS)

    Zhu Changchun

    2001-01-01

    The research is introduced on identification of the generalized state transfer matrix of a linear time-invariant (LTI) system by use of neural networks based on the LM (Levenberg-Marquardt) algorithm. Firstly, the generalized state transfer matrix is defined. The relationship between the identification of the state transfer matrix of structural dynamics and the identification of the weight matrix of neural networks has been established in theory. A single-layer neural network is adopted, as a powerful tool with parallel distributed processing ability and the capacity for adaptation or learning, to obtain the structural parameters. The constraint condition on the weight matrix of the neural network is deduced so that the learning and training of the designed network can be more effective. The identified neural network can be used to simulate the structural response excited by any other signals. To reflect its further application to practical problems, some noise (5% and 10%) is assumed to be present in the response measurements. Results from computer simulation studies show that this method is valid and feasible

  15. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  16. Fundamental study on the interpretation technique for 3-D MT data using neural networks. 2; Neural network wo mochiita sanjigen MT ho data kaishaku gijutsu ni kansuru kisoteki kenkyu. 2

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, K; Kobayashi, T [OYO Corp., Tokyo (Japan); Mogi, T [Kyushu University, Fukuoka (Japan). Faculty of Engineering; Spichak, V

    1997-10-22

    Behavior of neural networks with respect to noise and the constitution of an optimum network are studied for the construction of a 3-D MT data interpretation system using neural networks. In the study, the relationship is examined between the noise level of the training data and the noise level of the data that the constructed neural network can handle. It is found that the neural network is effective in interpreting data whose noise level is the same as that of the training data; that it cannot correctly interpret data it has not encountered during training, even if such data are free of noise; that the optimum number of neurons in a hidden layer is approximately 40 for a network architecture using the current system; and that the neuron gain function enhances recognition capability when a logistic function is used in the hidden layer and a linear function is used in the output layer. 2 refs., 7 figs., 2 tabs.

  17. Optics in neural computation

    Science.gov (United States)

    Levene, Michael John

    In all attempts to emulate the considerable powers of the brain, one is struck by both its immense size, parallelism, and complexity. While the fields of neural networks, artificial intelligence, and neuromorphic engineering have all attempted oversimplifications on the considerable complexity, all three can benefit from the inherent scalability and parallelism of optics. This thesis looks at specific aspects of three modes in which optics, and particularly volume holography, can play a part in neural computation. First, holography serves as the basis of highly-parallel correlators, which are the foundation of optical neural networks. The huge input capability of optical neural networks make them most useful for image processing and image recognition and tracking. These tasks benefit from the shift invariance of optical correlators. In this thesis, I analyze the capacity of correlators, and then present several techniques for controlling the amount of shift invariance. Of particular interest is the Fresnel correlator, in which the hologram is displaced from the Fourier plane. In this case, the amount of shift invariance is limited not just by the thickness of the hologram, but by the distance of the hologram from the Fourier plane. Second, volume holography can provide the huge storage capacity and high speed, parallel read-out necessary to support large artificial intelligence systems. However, previous methods for storing data in volume holograms have relied on awkward beam-steering or on as-yet non- existent cheap, wide-bandwidth, tunable laser sources. This thesis presents a new technique, shift multiplexing, which is capable of very high densities, but which has the advantage of a very simple implementation. In shift multiplexing, the reference wave consists of a focused spot a few millimeters in front of the hologram. Multiplexing is achieved by simply translating the hologram a few tens of microns or less. This thesis describes the theory for how shift

  18. Conducting polymer coated neural recording electrodes

    Science.gov (United States)

    Harris, Alexander R.; Morgan, Simeon J.; Chen, Jun; Kapsa, Robert M. I.; Wallace, Gordon G.; Paolini, Antonio G.

    2013-02-01

    Objective. Neural recording electrodes suffer from poor signal-to-noise ratio, charge density, biostability and biocompatibility. This paper investigates the ability of conducting polymer coated electrodes to record acute neural response in a systematic manner, allowing in-depth comparison of electrochemical and electrophysiological response. Approach. Polypyrrole (Ppy) and poly-3,4-ethylenedioxythiophene (PEDOT) doped with sulphate (SO4) or para-toluene sulfonate (pTS) were used to coat iridium neural recording electrodes. Detailed electrochemical and electrophysiological investigations were undertaken to compare the effect of these materials on acute in vivo recording. Main results. A range of charge density and impedance responses were seen with each respectively doped conducting polymer. All coatings produced greater charge density than uncoated electrodes, while PEDOT-pTS, PEDOT-SO4 and Ppy-SO4 possessed lower impedance values at 1 kHz than uncoated electrodes. Charge density increased with PEDOT-pTS thickness and impedance at 1 kHz was reduced with deposition times up to 45 s. Stable electrochemical response after acute implantation inferred biostability of PEDOT-pTS coated electrodes, while other electrode materials had variable impedance and/or charge density after implantation, indicative of a protein fouling layer forming on the electrode surface. Recording of neural response to white noise bursts after implantation of conducting polymer-coated electrodes into the inferior colliculus of a rat model showed a general decrease in background noise and an increase in signal-to-noise ratio and spike count with reduced impedance at 1 kHz, regardless of the specific electrode coating, compared to uncoated electrodes. A 45 s PEDOT-pTS deposition time yielded the highest signal-to-noise ratio and spike count. Significance. A method for comparing recording electrode materials has been demonstrated with doped conducting polymers. PEDOT-pTS showed remarkably low fouling during

  19. Modeling of an industrial process of pleuromutilin fermentation using feed-forward neural networks

    Directory of Open Access Journals (Sweden)

    L. Khaouane

    2013-03-01

    This work investigates the use of artificial neural networks in modeling an industrial fermentation process of pleuromutilin produced by Pleurotus mutilus in a fed-batch mode. Three feed-forward neural network models characterized by a similar structure (five neurons in the input layer, one hidden layer and one neuron in the output layer) are constructed and optimized with the aim to predict the evolution of three main bioprocess variables: biomass, substrate and product. Results show a good fit between the predicted and experimental values for each model (the root mean squared errors were 0.4624%, 0.1234 g/L and 0.0016 mg/g, respectively). Furthermore, the comparison between the optimized models and the unstructured kinetic models in terms of simulation results shows that the neural network models gave more significant results. These results encourage further studies to integrate the mathematical formulae extracted from these models into an industrial control loop of the process.

  20. A neural network model of the relativistic electron flux at geosynchronous orbit

    International Nuclear Information System (INIS)

    Koons, H.C.; Gorney, D.J.

    1991-01-01

    A neural network has been developed to model the temporal variations of relativistic (>3 MeV) electrons at geosynchronous orbit based on model inputs consisting of 10 consecutive days of the daily sum of the planetary magnetic index ΣKp. The neural network consists of three layers of neurons, containing 10 neurons in the input layer, 6 neurons in a hidden layer, and 1 output neuron. The output is a prediction of the daily-averaged electron flux for the tenth day. The neural network was trained using 62 days of data from July 1, 1984, through August 31, 1984, from the SEE spectrometer on the geosynchronous spacecraft 1982-019. The performance of the model was measured by comparing model outputs with measured fluxes over a 6-year period from April 19, 1982, to June 4, 1988. For the entire data set the rms logarithmic error of the neural network is 0.76, and the average logarithmic error is 0.58. The neural network is essentially zero biased, and for accumulation intervals of 3 days or longer the average logarithmic error is less than 0.1. The neural network provides results that are significantly more accurate than those from linear prediction filters. The model has been used to simulate conditions which are rarely observed in nature, such as long periods of quiet (ΣKp = 0) and ideal impulses. It has also been used to make reasonably accurate day-ahead forecasts of the relativistic electron flux at geosynchronous orbit
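
    The stated 10-6-1 architecture can be sketched as follows (the weights here are random placeholders rather than the trained model; the output would normally be the daily-averaged flux in logarithmic units):

        import numpy as np

        def flux_model(sum_kp_10days, W1, b1, W2, b2):
            """10 consecutive daily sums of Kp in, predicted daily-averaged flux for day 10 out."""
            h = np.tanh(W1 @ sum_kp_10days + b1)      # 6 hidden neurons
            return float(W2 @ h + b2)                 # single output neuron

        rng = np.random.default_rng(3)
        W1, b1 = rng.normal(size=(6, 10)), np.zeros(6)
        W2, b2 = rng.normal(size=6), 0.0
        print(flux_model(rng.uniform(0, 40, size=10), W1, b1, W2, b2))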

  1. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Besides, the results showed that this technique can be successfully applied to process control algorithms due to its short processing time and its flexibility in the incorporation of new data

  2. Cosmic-ray discrimination capabilities of DELTA E-E silicon nuclear telescopes using neural networks

    CERN Document Server

    Ambriola, M; Cafagna, F; Castellano, M; Ciacio, F; Circella, M; De Marzo, C N; Montaruli, T

    2000-01-01

    An isotope classifier of cosmic-ray events collected by space detectors has been implemented using a multi-layer perceptron neural architecture. In order to handle a great number of different isotopes, a modular architecture of the 'mixture of experts' type is proposed. The performance of this classifier has been tested on simulated data and has been compared with a 'classical' classifying procedure. The quantitative comparison with traditional techniques shows that the neural approach has classification performances comparable - within 1% - with those of the classical one, with an efficiency of the order of 98%. A possible hardware implementation of this kind of neural architecture in future space missions is considered.

  3. Prediction of Industrial Electric Energy Consumption in Anhui Province Based on GA-BP Neural Network

    Science.gov (United States)

    Zhang, Jiajing; Yin, Guodong; Ni, Youcong; Chen, Jinlan

    2018-01-01

    In order to improve the prediction accuracy of industrial electric energy consumption, a prediction model based on a genetic algorithm and a neural network is proposed. The model uses a genetic algorithm to optimize the weights and thresholds of a BP neural network, and it is applied to predict industrial electric energy consumption in Anhui Province. Comparative experiments between the GA-BP prediction model and a plain BP neural network model show that the GA-BP model is more accurate while using a smaller number of neurons in the hidden layer.
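
    A much-simplified sketch of the GA-BP idea, using selection and mutation only to search the weight space of a tiny network; the data, operators and network size below are toy assumptions and do not reproduce the authors' configuration.

        import numpy as np

        rng = np.random.default_rng(4)

        def mse(weights, X, y, n_in=4, n_hid=3):
            """Decode a flat chromosome into a small BP-style network and return its error."""
            i = 0
            W1 = weights[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
            b1 = weights[i:i + n_hid]; i += n_hid
            W2 = weights[i:i + n_hid]; i += n_hid
            b2 = weights[i]
            pred = np.tanh(X @ W1.T + b1) @ W2 + b2
            return np.mean((pred - y) ** 2)

        def ga_optimize(X, y, dim, pop=30, gens=50):
            population = rng.normal(size=(pop, dim))
            for _ in range(gens):
                fitness = np.array([mse(ind, X, y) for ind in population])
                parents = population[np.argsort(fitness)[:pop // 2]]         # selection
                children = parents + 0.1 * rng.normal(size=parents.shape)    # mutation
                population = np.vstack([parents, children])
            return population[np.argmin([mse(ind, X, y) for ind in population])]

        X, y = rng.normal(size=(100, 4)), rng.normal(size=100)
        best = ga_optimize(X, y, dim=4 * 3 + 3 + 3 + 1)    # chromosome length for a 4-3-1 net
        print(mse(best, X, y))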

  4. A comparative study of multilayer perceptron neural networks for the identification of rhubarb samples.

    Science.gov (United States)

    Zhang, Zhuoyong; Wang, Yamin; Fan, Guoqiang; Harrington, Peter de B

    2007-01-01

    Artificial neural networks have gained much attention in recent years as fast and flexible methods for quality control in traditional medicine. Near-infrared (NIR) spectroscopy has become an accepted method for the qualitative and quantitative analyses of traditional Chinese medicine since it is simple, rapid, and non-destructive. The present paper describes a method by which to discriminate official and unofficial rhubarb samples using three-layer perceptron neural networks applied to NIR data. Multilayer perceptron neural networks were trained with back-propagation, delta-bar-delta and quick-propagation algorithms. Results obtained using these methods were all satisfactory, but the best outcomes were obtained with the delta-bar-delta algorithm.

  5. Comparison between extreme learning machine and wavelet neural networks in data classification

    Science.gov (United States)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    Extreme Learning Machine (ELM) is a well-known algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and an extremely fast learning algorithm with good generalization performance. In this paper, we compare the Extreme Learning Machine with wavelet neural networks, another widely used algorithm. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results show that both the extreme learning machine and wavelet neural networks achieve good results.
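
    A minimal ELM sketch illustrating why training is so fast: the hidden weights are random and fixed, and only the output weights are obtained, in one step, from a least-squares fit (the sigmoid activation and the sizes below are assumptions, not the paper's settings).

        import numpy as np

        def elm_fit(X, Y, n_hidden=50, rng=np.random.default_rng(5)):
            W = rng.normal(size=(X.shape[1], n_hidden))      # random hidden weights, never trained
            b = rng.normal(size=n_hidden)
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer outputs
            beta = np.linalg.pinv(H) @ Y                     # output weights in one step
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

        rng = np.random.default_rng(6)
        X, Y = rng.normal(size=(150, 4)), rng.integers(0, 2, size=(150, 1)).astype(float)
        W, b, beta = elm_fit(X, Y)
        print(elm_predict(X[:5], W, b, beta).round(2))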

  6. Development and application of deep convolutional neural network in target detection

    Science.gov (United States)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with many hidden layers have more powerful feature learning and feature expression abilities than traditional machine learning methods, allowing artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some problems in current research, and finally outlines prospects for the future development of deep convolutional neural networks.

  7. Gelatin methacrylamide hydrogel with graphene nanoplatelets for neural cell-laden 3D bioprinting.

    Science.gov (United States)

    Wei Zhu; Harris, Brent T; Zhang, Lijie Grace

    2016-08-01

    The nervous system is extremely complex, and nerves rarely regrow once injury or disease occurs. Advanced 3D bioprinting strategies, which can simultaneously deposit biocompatible materials, cells and supporting components in a layer-by-layer manner, may be a promising solution to address neural damage. Here we present a printable nano-bioink composed of gelatin methacrylamide (GelMA), neural stem cells, and bioactive graphene nanoplatelets to target nerve tissue regeneration with the aid of a stereolithography-based 3D bioprinting technique. We found that the compressive modulus of the resultant GelMA hydrogel increases with GelMA concentration. The porous GelMA hydrogel can provide a biocompatible microenvironment for the survival and growth of neural stem cells. Cells encapsulated in the hydrogel presented good viability at low GelMA concentration. The printed neural construct exhibited well-defined architecture and homogeneous cell distribution. In addition, neural stem cells showed neuronal differentiation and neurite elongation within the printed construct after two weeks of culture. These findings indicate that the 3D bioprinted neural construct has great potential for neural tissue regeneration.

  8. Application of improved PSO-RBF neural network in the synthetic ammonia decarbonization

    Directory of Open Access Journals (Sweden)

    Yongwei LI

    2017-12-01

    The synthetic ammonia decarbonization is a typical complex industrial process with time-varying, nonlinear and uncertain characteristics, for which an on-line control model is difficult to establish. An improved PSO-RBF neural network control algorithm is proposed to address the low precision and poor robustness encountered in this complex process. The particle swarm optimization algorithm and the RBF neural network are combined: the improved particle swarm algorithm is used to optimize the centers and widths of the hidden-layer basis functions and the connection weights of the output layer, yielding an RBF neural network model optimized by the improved PSO algorithm. The improved PSO-RBF neural network control model is applied to the key carbonization process and compared with a traditional fuzzy neural network. The simulation results show that the improved PSO-RBF neural network control method applied to the synthetic ammonia decarbonization process achieves higher control accuracy and system robustness, providing an effective way to address the modeling and optimization control of a complex industrial process.

  9. VSWI Wetlands Advisory Layer

    Data.gov (United States)

    Vermont Center for Geographic Information — This dataset represents the DEC Wetlands Program's Advisory layer. This layer makes the most up-to-date, non-jurisdictional wetlands mapping available to the public...

  10. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    Science.gov (United States)

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

    Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused on solely mimicking either the neuron or the synapse functionality. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low-resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit- and system-level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings by  ∼  100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  11. Towards dropout training for convolutional neural networks.

    Science.gov (United States)

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
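
    The equivalence stated above can be checked numerically on a toy pooling region (non-negative, ReLU-style activations are assumed and the function names are illustrative): sampling max-pooling dropout many times should approach the probabilistic weighted pooling value.

        import numpy as np

        def max_pooling_dropout(acts, retain_p, rng):
            """Training-time pooling: drop units at random, then take the max of what survives."""
            mask = rng.random(acts.shape) < retain_p
            return np.where(mask, acts, 0.0).max()

        def probabilistic_weighted_pooling(acts, retain_p):
            """Test-time pooling: weight each sorted activation by its selection probability."""
            q = 1.0 - retain_p
            a = np.sort(acts)                                  # ascending: a[-1] is the largest
            n = a.size
            probs = retain_p * q ** (n - 1 - np.arange(n))     # P(a[i] survives as the max)
            return float(probs @ a)                            # expectation over dropout masks

        rng = np.random.default_rng(12)
        region = np.array([0.2, 1.3, 0.7, 0.9])                # one pooling region of ReLU outputs
        samples = [max_pooling_dropout(region, 0.5, rng) for _ in range(20000)]
        print(np.mean(samples), probabilistic_weighted_pooling(region, 0.5))   # should be close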

  12. Layer-by-layer cell membrane assembly

    Science.gov (United States)

    Matosevic, Sandro; Paegel, Brian M.

    2013-11-01

    Eukaryotic subcellular membrane systems, such as the nuclear envelope or endoplasmic reticulum, present a rich array of architecturally and compositionally complex supramolecular targets that are as yet inaccessible. Here we describe layer-by-layer phospholipid membrane assembly on microfluidic droplets, a route to structures with defined compositional asymmetry and lamellarity. Starting with phospholipid-stabilized water-in-oil droplets trapped in a static droplet array, lipid monolayer deposition proceeds as oil/water-phase boundaries pass over the droplets. Unilamellar vesicles assembled layer-by-layer support functional insertion both of purified and of in situ expressed membrane proteins. Synthesis and chemical probing of asymmetric unilamellar and double-bilayer vesicles demonstrate the programmability of both membrane lamellarity and lipid-leaflet composition during assembly. The immobilized vesicle arrays are a pragmatic experimental platform for biophysical studies of membranes and their associated proteins, particularly complexes that assemble and function in multilamellar contexts in vivo.

  13. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  14. Neural Synchronization and Cryptography

    Science.gov (United States)

    Ruttor, Andreas

    2007-11-01

    Neural networks can synchronize by learning from each other. In the case of discrete weights full synchronization is achieved in a finite number of steps. Additional networks can be trained by using the inputs and outputs generated during this process as examples. Several learning rules for both tasks are presented and analyzed. In the case of Tree Parity Machines synchronization is much faster than learning. Scaling laws for the number of steps needed for full synchronization and successful learning are derived using analytical models. They indicate that the difference between both processes can be controlled by changing the synaptic depth. In the case of bidirectional interaction the synchronization time increases proportional to the square of this parameter, but it grows exponentially, if information is transmitted in one direction only. Because of this effect neural synchronization can be used to construct a cryptographic key-exchange protocol. Here the partners benefit from mutual interaction, so that a passive attacker is usually unable to learn the generated key in time. The success probabilities of different attack methods are determined by numerical simulations and scaling laws are derived from the data. They show that the partners can reach any desired level of security by just increasing the synaptic depth. Then the complexity of a successful attack grows exponentially, but there is only a polynomial increase of the effort needed to generate a key. Further improvements of security are possible by replacing the random inputs with queries generated by the partners.
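
    A compact sketch of the key-exchange mechanism with toy parameters (K hidden units, N inputs per unit, synaptic depth L) and a Hebbian-style update applied only when the two machines' outputs agree; this illustrates the protocol, not a security analysis.

        import numpy as np

        K, N, L = 3, 10, 3
        rng = np.random.default_rng(7)

        def tpm_output(W, x):
            sigma = np.sign(np.sum(W * x, axis=1))
            sigma[sigma == 0] = -1
            return sigma, int(np.prod(sigma))

        def update(W, x, sigma, tau, tau_other):
            if tau == tau_other:                         # learn only when outputs agree
                for k in range(K):
                    if sigma[k] == tau:
                        W[k] = np.clip(W[k] + sigma[k] * x[k], -L, L)

        A = rng.integers(-L, L + 1, size=(K, N))         # party A's secret weights
        B = rng.integers(-L, L + 1, size=(K, N))         # party B's secret weights
        steps = 0
        while not np.array_equal(A, B):
            x = rng.choice([-1, 1], size=(K, N))         # common public input
            sA, tA = tpm_output(A, x)
            sB, tB = tpm_output(B, x)
            update(A, x, sA, tA, tB); update(B, x, sB, tB, tA)
            steps += 1
        print("synchronized after", steps, "steps; key material:", A.flatten()[:8])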

  15. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  16. Neural networks at the Tevatron

    International Nuclear Information System (INIS)

    Badgett, W.; Burkett, K.; Campbell, M.K.; Wu, D.Y.; Bianchin, S.; DeNardi, M.; Pauletta, G.; Santi, L.; Caner, A.; Denby, B.; Haggerty, H.; Lindsey, C.S.; Wainer, N.; Dall'Agata, M.; Johns, K.; Dickson, M.; Stanco, L.; Wyss, J.L.

    1992-10-01

    This paper summarizes neural network applications at the Fermilab Tevatron, including the first online hardware application in high energy physics (muon tracking): the CDF and D0 neural network triggers; offline quark/gluon discrimination at CDF; and a new tool for top-to-multijets recognition at CDF

  17. Neural Networks for the Beginner.

    Science.gov (United States)

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  18. Neural fields theory and applications

    CERN Document Server

    Graben, Peter; Potthast, Roland; Wright, James

    2014-01-01

    With this book, the editors present the first comprehensive collection in neural field studies, authored by leading scientists in the field - among them are two of the founding-fathers of neural field theory. Up to now, research results in the field have been disseminated across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and others. Starting with a tutorial for novices in neural field studies, the book comprises chapters on emergent patterns, their phase transitions and evolution, on stochastic approaches, cortical development, cognition, robotics and computation, large-scale numerical simulations, the coupling of neural fields to the electroencephalogram and phase transitions in anesthesia. The intended readership are students and scientists in applied mathematics, theoretical physics, theoretical biology, and computational neuroscience. Neural field theory and its applications have a long-standing tradition in the mathematical and computational ...

  19. Artificial neural networks in NDT

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed

    2001-01-01

    Artificial neural networks, simply known as neural networks, have attracted considerable interest in recent years, largely because of a growing recognition of the potential of these computational paradigms as powerful alternative models to conventional pattern recognition or function approximation techniques. The neural networks approach is having a profound effect on almost all fields, and has been utilised in fields where experimental inter-disciplinary work is being carried out. Being a multidisciplinary subject with a broad knowledge base, Nondestructive Testing (NDT) or Nondestructive Evaluation (NDE) is no exception. This paper explains typical applications of neural networks in NDT/NDE. Three promising types of neural networks are highlighted, namely back-propagation, binary Hopfield and Kohonen's self-organising maps. (Author)

  20. An improved advertising CTR prediction approach based on the fuzzy deep neural network.

    Science.gov (United States)

    Jiang, Zilong; Gao, Shu; Li, Mingjiang

    2018-01-01

    Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to input raw data from advertising datasets. Next, fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) with the unsupervised method layer by layer. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness in advertising click log datasets with noise.

  1. Double layers in space

    International Nuclear Information System (INIS)

    Carlqvist, P.

    1982-07-01

    For more than a decade it has been realised that electrostatic double layers are likely to occur in space. We briefly discuss the theoretical background of such double layers. Most of the paper is devoted to an account of the observational evidence for double layers in the ionosphere and magnetosphere of the Earth. Several different experiments are reviewed including rocket and satellite measurements and ground based observations. It is concluded that the observational evidence for double layers in space is very strong. The experimental results indicate that double layers with widely different properties may exist in space. (Author)

  2. Double layers in space

    International Nuclear Information System (INIS)

    Carlqvist, P.

    1982-01-01

    For more than a decade it has been realised that electrostatic double layers are likely to occur in space. The author briefly discusses the theoretical background of such double layers. Most of the paper is devoted to an account of the observational evidence for double layers in the ionosphere and magnetosphere of the Earth. Several different experiments are reviewed including rocket and satellite measurements and ground based observations. It is concluded that the observational evidence for double layers in space is very strong. The experimental results indicate that double layers with widely different properties may exist in space. (Auth.)

  3. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with a conjugate gradient algorithm are used to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of anesthetic medicine to be applied at that moment. Feed-forward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes. EEG data have been recorded with a sampling period of 2 milliseconds. The artificial neural network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The network inputs are derived from the power spectral density (PSD) of 10-second EEG segments in the 1-50 Hz frequency range, expressed as the ratio of the total PSD power of the EEG segment at that moment in this range to the total PSD power of an EEG segment taken prior to anesthesia.
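
    A rough sketch of the Elman recurrent structure with the stated 60-30-1 sizes; the weights below are random placeholders, the conjugate-gradient training itself is not shown, and the sigmoid choice follows the abstract.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def elman_forward(sequence, W_in, W_ctx, b_h, W_out, b_out):
            """sequence: iterable of 60-dimensional EEG feature vectors."""
            h = np.zeros(W_ctx.shape[0])                       # context units start at zero
            outputs = []
            for x in sequence:
                h = sigmoid(W_in @ x + W_ctx @ h + b_h)        # hidden state uses the previous h
                outputs.append(sigmoid(W_out @ h + b_out).item())
            return outputs

        rng = np.random.default_rng(8)
        W_in, W_ctx, b_h = rng.normal(size=(30, 60)), rng.normal(size=(30, 30)), np.zeros(30)
        W_out, b_out = rng.normal(size=(1, 30)), np.zeros(1)
        eeg_features = rng.normal(size=(5, 60))                # five consecutive 10-s segments
        print(elman_forward(eeg_features, W_in, W_ctx, b_h, W_out, b_out))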

  4. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered networks and self-organizing analysis maps, which have strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  5. Identifying Jets Using Artificial Neural Networks

    Science.gov (United States)

    Rosand, Benjamin; Caines, Helen; Checa, Sofia

    2017-09-01

    We investigate particle jet interactions with the Quark Gluon Plasma (QGP) using artificial neural networks modeled on those used in computer image recognition. We create jet images by binning jet particles into pixels and preprocessing every image. We analyzed the jets with a Multi-layered maxout network and a convolutional network. We demonstrate each network's effectiveness in differentiating simulated quenched jets from unquenched jets, and we investigate the method that the network uses to discriminate among different quenched jet simulations. Finally, we develop a greater understanding of the physics behind quenched jets by investigating what the network learnt as well as its effectiveness in differentiating samples. Yale College Freshman Summer Research Fellowship in the Sciences and Engineering.

  6. Hopfield neural network in HEP track reconstruction

    International Nuclear Information System (INIS)

    Muresan, Raluca; Pentia, Mircea

    1996-01-01

    This work uses a neural network technique (Hopfield method) to reconstruct particle tracks starting from a data set obtained with a coordinate detector system placed around a high-energy particle interaction region. A learning algorithm for finding the optimal connection of the signal points has been elaborated and tested. We used a single-layer neural network with constraints in order to obtain the particle tracks drawn through the detected signal points. The dynamics of the system is given by the MFT equations, which determine the system's evolution to a minimum of an energy function. We developed a computer program that has been tested on a large set of Monte Carlo simulated data. With this program we obtained good results even for a noise/signal ratio of 200. (authors)
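
    Schematically, candidate connections between signal points are encoded as neurons and relaxed under mean-field (MFT) dynamics toward a minimum of an energy function; a toy version of the relaxation loop follows (hypothetical couplings and biases, not the authors' cost function).

        import numpy as np

        def mft_relax(J, h, T=1.0, n_iter=200):
            """Mean-field relaxation of neuron activations s in [0, 1].

            J : (n, n) symmetric couplings between candidate track segments
            h : (n,)   bias favouring or penalising individual segments
            """
            rng = np.random.default_rng(9)
            s = rng.uniform(0.4, 0.6, size=h.shape)            # soft initial activations
            for _ in range(n_iter):
                field = J @ s + h
                s = 0.5 * (1.0 + np.tanh(field / T))           # MFT update
            return s > 0.5                                     # segments kept in the final tracks

        n = 12
        rng = np.random.default_rng(10)
        J = rng.normal(size=(n, n)); J = 0.5 * (J + J.T); np.fill_diagonal(J, 0.0)
        h = rng.normal(size=n)
        print(mft_relax(J, h).astype(int))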

  7. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Portes, Jacob P.; Timerman, Dmitriy

    2016-01-01

    Brain hemodynamics serve as a proxy for neural activity in a range of noninvasive neuroimaging techniques including functional magnetic resonance imaging (fMRI). In resting-state fMRI, hemodynamic fluctuations have been found to exhibit patterns of bilateral synchrony, with correlated regions inferred to have functional connectivity. However, the relationship between resting-state hemodynamics and underlying neural activity has not been well established, making the neural underpinnings of functional connectivity networks unclear. In this study, neural activity and hemodynamics were recorded simultaneously over the bilateral cortex of awake and anesthetized Thy1-GCaMP mice using wide-field optical mapping. Neural activity was visualized via selective expression of the calcium-sensitive fluorophore GCaMP in layer 2/3 and 5 excitatory neurons. Characteristic patterns of resting-state hemodynamics were accompanied by more rapidly changing bilateral patterns of resting-state neural activity. Spatiotemporal hemodynamics could be modeled by convolving this neural activity with hemodynamic response functions derived through both deconvolution and gamma-variate fitting. Simultaneous imaging and electrophysiology confirmed that Thy1-GCaMP signals are well-predicted by multiunit activity. Neurovascular coupling between resting-state neural activity and hemodynamics was robust and fast in awake animals, whereas coupling in urethane-anesthetized animals was slower, and in some cases included lower-frequency (resting-state hemodynamics in the awake and anesthetized brain are coupled to underlying patterns of excitatory neural activity. The patterns of bilaterally-symmetric spontaneous neural activity revealed by wide-field Thy1-GCaMP imaging may depict the neural foundation of functional connectivity networks detected in resting-state fMRI. PMID:27974609

  8. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.
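
    The "fully connected layer as a convolution" idea mentioned above can be checked in a few lines (toy shapes; this shows the general trick, not the paper's exact architecture): the dense weights are reused at every spatial position of a feature map, so one pass yields a prediction per position.

        import numpy as np

        def dense(features, W, b):
            """Ordinary fully connected layer on a single feature vector."""
            return W @ features + b

        def conv1x1(feature_map, W, b):
            """Same weights applied as a 1x1 convolution over a (C, H, W) feature map."""
            out = np.einsum('kc,chw->khw', W, feature_map) + b[:, None, None]
            return out                                   # (n_classes, H, W): one prediction per position

        rng = np.random.default_rng(11)
        W, b = rng.normal(size=(5, 8)), rng.normal(size=5)   # 8 features -> 5 classes
        fmap = rng.normal(size=(8, 4, 4))
        # the convolutional form reproduces the dense layer at every spatial location
        assert np.allclose(conv1x1(fmap, W, b)[:, 2, 3], dense(fmap[:, 2, 3], W, b))
        print(conv1x1(fmap, W, b).shape)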

  9. Interacting neural networks

    Science.gov (United States)

    Metzler, R.; Kinzel, W.; Kanter, I.

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or the Minority Game, in which a set of agents must each make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.

  10. Neural circuitry and immunity

    Science.gov (United States)

    Pavlov, Valentin A.; Tracey, Kevin J.

    2015-01-01

    Research during the last decade has significantly advanced our understanding of the molecular mechanisms at the interface between the nervous system and the immune system. Insight into bidirectional neuroimmune communication has characterized the nervous system as an important partner of the immune system in the regulation of inflammation. Neuronal pathways, including the vagus nerve-based inflammatory reflex are physiological regulators of immune function and inflammation. In parallel, neuronal function is altered in conditions characterized by immune dysregulation and inflammation. Here, we review these regulatory mechanisms and describe the neural circuitry modulating immunity. Understanding these mechanisms reveals possibilities to use targeted neuromodulation as a therapeutic approach for inflammatory and autoimmune disorders. These findings and current clinical exploration of neuromodulation in the treatment of inflammatory diseases defines the emerging field of Bioelectronic Medicine. PMID:26512000

  11. Neural Darwinism and consciousness.

    Science.gov (United States)

    Seth, Anil K; Baars, Bernard J

    2005-03-01

    Neural Darwinism (ND) is a large scale selectionist theory of brain development and function that has been hypothesized to relate to consciousness. According to ND, consciousness is entailed by reentrant interactions among neuronal populations in the thalamocortical system (the 'dynamic core'). These interactions, which permit high-order discriminations among possible core states, confer selective advantages on organisms possessing them by linking current perceptual events to a past history of value-dependent learning. Here, we assess the consistency of ND with 16 widely recognized properties of consciousness, both physiological (for example, consciousness is associated with widespread, relatively fast, low amplitude interactions in the thalamocortical system), and phenomenal (for example, consciousness involves the existence of a private flow of events available only to the experiencing subject). While no theory accounts fully for all of these properties at present, we find that ND and its recent extensions fare well.

  12. Traffic sign recognition based on deep convolutional neural network

    Science.gov (United States)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for the TSR system based on a deep convolutional neural network. In order to enhance the expressive power of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connections is designed. Our network has 10 layers with parameters (counting a block-layer as a single layer): the first seven are alternating convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function we employ in our network is the scaled exponential linear unit (SELU), which can induce self-normalizing properties. To speed up training, we use an efficient GPU to accelerate the convolutional operations. On the test dataset of GTSRB, we achieve an accuracy rate of 99.67%, exceeding the state-of-the-art results.
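
    For reference, the SELU activation mentioned above uses the fixed constants lambda ≈ 1.0507 and alpha ≈ 1.6733; a brief sketch:

        import numpy as np

        def selu(x, lam=1.0507009873554805, alpha=1.6732632423543772):
            """Scaled exponential linear unit: lam * x for x > 0, lam * alpha * (exp(x) - 1) otherwise."""
            return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

        print(selu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])).round(3))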

  13. YAP/TAZ enhance mammalian embryonic neural stem cell characteristics in a Tead-dependent manner

    Energy Technology Data Exchange (ETDEWEB)

    Han, Dasol; Byun, Sung-Hyun; Park, Soojeong; Kim, Juwan; Kim, Inhee; Ha, Soobong; Kwon, Mookwang; Yoon, Keejung, E-mail: keejung@skku.edu

    2015-02-27

    Mammalian brain development is regulated by multiple signaling pathways controlling cell proliferation, migration and differentiation. Here we show that YAP/TAZ enhance embryonic neural stem cell characteristics in a cell autonomous fashion using diverse experimental approaches. Introduction of retroviral vectors expressing YAP or TAZ into the mouse embryonic brain induced cell localization in the ventricular zone (VZ), which is the embryonic neural stem cell niche. This change in cell distribution in the cortical layer is due to the increased stemness of infected cells; YAP-expressing cells were colabeled with Sox2, a neural stem cell marker, and YAP/TAZ increased the frequency and size of neurospheres, indicating enhanced self-renewal- and proliferative ability of neural stem cells. These effects appear to be TEA domain family transcription factor (Tead)–dependent; a Tead binding-defective YAP mutant lost the ability to promote neural stem cell characteristics. Consistently, in utero gene transfer of a constitutively active form of Tead2 (Tead2-VP16) recapitulated all the features of YAP/TAZ overexpression, and dominant negative Tead2-EnR resulted in marked cell exit from the VZ toward outer cortical layers. Taken together, these results indicate that the Tead-dependent YAP/TAZ signaling pathway plays important roles in neural stem cell maintenance by enhancing stemness of neural stem cells during mammalian brain development. - Highlights: • Roles of YAP and Tead in vivo during mammalian brain development are clarified. • Expression of YAP promotes embryonic neural stem cell characteristics in vivo in a cell autonomous fashion. • Enhancement of neural stem cell characteristics by YAP depends on Tead. • Transcriptionally active form of Tead alone can recapitulate the effects of YAP. • Transcriptionally repressive form of Tead severely reduces stem cell characteristics.

  14. YAP/TAZ enhance mammalian embryonic neural stem cell characteristics in a Tead-dependent manner

    International Nuclear Information System (INIS)

    Han, Dasol; Byun, Sung-Hyun; Park, Soojeong; Kim, Juwan; Kim, Inhee; Ha, Soobong; Kwon, Mookwang; Yoon, Keejung

    2015-01-01

    Mammalian brain development is regulated by multiple signaling pathways controlling cell proliferation, migration and differentiation. Here we show that YAP/TAZ enhance embryonic neural stem cell characteristics in a cell autonomous fashion using diverse experimental approaches. Introduction of retroviral vectors expressing YAP or TAZ into the mouse embryonic brain induced cell localization in the ventricular zone (VZ), which is the embryonic neural stem cell niche. This change in cell distribution in the cortical layer is due to the increased stemness of infected cells; YAP-expressing cells were colabeled with Sox2, a neural stem cell marker, and YAP/TAZ increased the frequency and size of neurospheres, indicating enhanced self-renewal- and proliferative ability of neural stem cells. These effects appear to be TEA domain family transcription factor (Tead)–dependent; a Tead binding-defective YAP mutant lost the ability to promote neural stem cell characteristics. Consistently, in utero gene transfer of a constitutively active form of Tead2 (Tead2-VP16) recapitulated all the features of YAP/TAZ overexpression, and dominant negative Tead2-EnR resulted in marked cell exit from the VZ toward outer cortical layers. Taken together, these results indicate that the Tead-dependent YAP/TAZ signaling pathway plays important roles in neural stem cell maintenance by enhancing stemness of neural stem cells during mammalian brain development. - Highlights: • Roles of YAP and Tead in vivo during mammalian brain development are clarified. • Expression of YAP promotes embryonic neural stem cell characteristics in vivo in a cell autonomous fashion. • Enhancement of neural stem cell characteristics by YAP depends on Tead. • Transcriptionally active form of Tead alone can recapitulate the effects of YAP. • Transcriptionally repressive form of Tead severely reduces stem cell characteristics

  15. Layer-by-layer films assembled from natural polymers for sustained release of neurotrophin

    International Nuclear Information System (INIS)

    Zhang, Zhiling; Li, Qianqi; Han, Lin; Zhong, Yinghui

    2015-01-01

    Cortical neural prostheses (CNPs) hold great promise for paralyzed patients by recording neural signals from the brain and translating them into movement commands. However, these electrodes normally fail to record neural signals weeks to months after implantation due to inflammation and neuronal loss around the implanted neural electrodes. Sustained local delivery of neurotrophins from biocompatible coatings on CNPs can potentially promote neuron survival and attract the nearby neurons to migrate toward the electrodes to increase neuron density at the electrode/brain interface, which is important for maintaining the recording quality and long-term performance of the implanted CNPs. However, sustained release of neurotrophins from biocompatible ultrathin coatings is very difficult to achieve. In this study, we investigated the potential of several biocompatible natural polyanions including heparin, dextran sulfate, and gelatin to form layer-by-layer (LbL) assembly with positively charged neurotrophin nerve growth factor (NGF) and its model protein lysozyme, and whether sustained release of NGF and lysozyme can be achieved from the nanoscale thin LbL coatings. We found that gelatin, which is less negatively charged than heparin and dextran sulfate, showed the highest efficacy in loading proteins into the LbL films because other interactions in addition to electrostatic interactions were involved in LbL assembly. Sustained release of NGF and lysozymes for approximately 2 weeks was achieved from the gelatin-based LbL coatings. Released NGF maintained the bioactivity to stimulate neurite outgrowth from PC12 cells. Gelatin is generally recognized as safe by the FDA. Thus, the biocompatible LbL coating developed in this study is highly promising to be used for implanted CNPs to improve their long-term performance in human patients. (paper)

  16. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  17. Artificial Neural Network Analysis System

    Science.gov (United States)

    2001-02-27

    Report documentation page for contract no. DASG60-00-M-0201 (purchase request no.: Foot in the Door-01). Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report date: 27-02-2001; period covered: 28-10-2000 to 27-02-2001.

  18. Cooperating attackers in neural cryptography.

    Science.gov (United States)

    Shacham, Lanir N; Klein, Einat; Mislovaty, Rachel; Kanter, Ido; Kinzel, Wolfgang

    2004-06-01

    A successful attack strategy in neural cryptography is presented. The neural cryptosystem, based on synchronization of neural networks by mutual learning, has recently been shown to be secure under different attack strategies. The success of the advanced attacker presented here, called the "majority-flipping attacker," does not decay with the parameters of the model. This attacker's outstanding success is due to its use of a group of attackers which cooperate throughout the synchronization process, unlike any other known attack strategy. An analytical description of this attack is also presented and fits the results of simulations.

  19. Creative-Dynamics Approach To Neural Intelligence

    Science.gov (United States)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  20. Multi-layers castings

    Directory of Open Access Journals (Sweden)

    J. Szajnar

    2010-01-01

    Full Text Available This paper presents the possibility of producing multi-layer cast steel castings by combining casting and welding coating technologies. The first layer was a composite surface layer based on an Fe-Cr-C alloy, applied directly during casting of carbon cast steel 200–450 using the mould cavity preparation method. The second layer consisted of padding welds deposited by TIG (Tungsten Inert Gas) surfacing with fillers on a Ni matrix, on a Ni and Co matrix with tungsten carbides (WC), and on the basis of an Fe-Cr-C alloy with the same chemical composition as the alloy used for the composite surface layer. The suitability of the castings' surface layers for industrial applications was assessed by the criteria of hardness and abrasive wear resistance of the metal-mineral type.

  1. Synchronization in networks with multiple interaction layers

    Science.gov (United States)

    del Genio, Charo I.; Gómez-Gardeñes, Jesús; Bonamassa, Ivan; Boccaletti, Stefano

    2016-01-01

    The structure of many real-world systems is best captured by networks consisting of several interaction layers. Understanding how a multilayered structure of connections affects the synchronization properties of dynamical systems evolving on top of it is a highly relevant endeavor in mathematics and physics and has potential applications in several socially relevant topics, such as power grid engineering and neural dynamics. We propose a general framework to assess the stability of the synchronized state in networks with multiple interaction layers, deriving a necessary condition that generalizes the master stability function approach. We validate our method by applying it to a network of Rössler oscillators with a double layer of interactions and show that highly rich phenomenology emerges from this. This includes cases where the stability of synchronization can be induced even if both layers would have individually induced unstable synchrony, an effect genuinely arising from the true multilayer structure of the interactions among the units in the network. PMID:28138540

  2. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both accuracy and diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This indicates LEA's emphasis on accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results have revealed clearly that LEA is better than other ensemble and nonensemble methods.

  3. A double layer review

    International Nuclear Information System (INIS)

    Block, L.P.

    1977-06-01

    A review of the main results on electrostatic double layers (sometimes called space charge layers or sheaths) obtained from theory, laboratory experiments and space experiments up to the spring of 1977 is given. By means of barium jets and satellite probes, double layers have now been found at the altitudes earlier predicted theoretically. The general potential distribution above the auroral zone, suggested by inverted V-events and electric field reversals, is corroborated. (author)

  4. Two layer powder pressing

    International Nuclear Information System (INIS)

    Schreiner, H.

    1979-01-01

    First, the significance and advantages of sintered materials consisting of two layers are pointed out. By means of the two-layer powder pressing technique, metal powders are formed into compacts with high accuracy of shape and mass. Attributes of the basic powders, different filling methods and pressing techniques are discussed. The described technique is expected to find further applications in the field of two-layer compacts in the near future.

  5. Economical Atomic Layer Deposition

    Science.gov (United States)

    Wyman, Richard; Davis, Robert; Linford, Matthew

    2010-10-01

    Atomic Layer Deposition is a self limiting deposition process that can produce films at a user specified height. At BYU we have designed a low cost and automated atomic layer deposition system. We have used the system to deposit silicon dioxide at room temperature using silicon tetrachloride and tetramethyl orthosilicate. Basics of atomic layer deposition, the system set up, automation techniques and our system's characterization are discussed.

  6. Stable Boundary Layer Issues

    OpenAIRE

    Steeneveld, G.J.

    2012-01-01

    Understanding and prediction of the stable atmospheric boundary layer is a challenging task. Many physical processes are relevant in the stable boundary layer, i.e. turbulence, radiation, land surface coupling, orographic turbulent and gravity wave drag, and land surface heterogeneity. The development of robust stable boundary layer parameterizations for use in NWP and climate models is hampered by the multiplicity of processes and their unknown interactions. As a result, these models suffer ...

  7. Layered plasma polymer composite membranes

    Science.gov (United States)

    Babcock, Walter C.

    1994-01-01

    Layered plasma polymer composite fluid separation membranes are disclosed, which comprise alternating selective and permeable layers for a total of at least 2n layers, where n is ≥ 2 and is the number of selective layers.

  8. Formation of double layers

    International Nuclear Information System (INIS)

    Leung, P.; Wong, A.Y.; Quon, B.H.

    1981-01-01

    Experiments on both stationary and propagating double layers and a related analytical model are described. Stationary double layers were produced in a multiple plasma device, in which an electron drift current was present. An investigation of the plasma parameters for the stable double layer condition is described. The particle distribution in the stable double layer establishes a potential profile, which creates electron and ion beams that excite plasma instabilities. The measured characteristics of the instabilities are consistent with the existence of the double layer. Propagating double layers are formed when the initial electron drift current is large. The slopes of the transition region increase as they propagate. A physical model for the formation of a double layer in the experimental device is described. This model explains the formation of the low potential region on the basis of the space charge created by the electron drift current. The model also accounts for the role of ions in double layer formation and explains the formation of moving double layers. (Auth.)

  9. Electroless atomic layer deposition

    Science.gov (United States)

    Robinson, David Bruce; Cappillino, Patrick J.; Sheridan, Leah B.; Stickney, John L.; Benson, David M.

    2017-10-31

    A method of electroless atomic layer deposition is described. The method electrolessly generates a layer of sacrificial material on a surface of a first material. The method adds doses of a solution of a second material to the substrate. The method performs a galvanic exchange reaction to oxidize away the layer of the sacrificial material and deposit a layer of the second material on the surface of the first material. The method can be repeated for a plurality of iterations in order to deposit a desired thickness of the second material on the surface of the first material.

  10. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Musson, John C. [JLAB; Seaton, Chad [JLAB; Spata, Mike F. [JLAB; Yan, Jianxun [JLAB

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.
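
    As a rough illustration of this approach (not the JLAB implementation), the sketch below applies a small, already-trained multi-layer perceptron to four electrode amplitudes to produce a corrected beam position; the layer sizes, weights and scaling are hypothetical placeholders.

        import numpy as np

        def sigmoid(x):
            # Activation layer; in the FPGA/SDR version this would be approximated, e.g. via CORDIC.
            return 1.0 / (1.0 + np.exp(-x))

        class BPMLinearizer:
            """Minimal multi-layer perceptron: 4 electrode amplitudes -> (x, y) beam position."""
            def __init__(self, w1, b1, w2, b2):
                # Weights would come from offline supervised learning; here they are random placeholders.
                self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

            def __call__(self, electrodes):
                h = sigmoid(self.w1 @ electrodes + self.b1)   # hidden (activation) layer
                return self.w2 @ h + self.b2                  # linear output: corrected x, y

        # Hypothetical parameters: 8 hidden units, 4 inputs, 2 outputs.
        rng = np.random.default_rng(0)
        net = BPMLinearizer(rng.normal(size=(8, 4)), np.zeros(8),
                            rng.normal(size=(2, 8)), np.zeros(2))
        print(net(np.array([0.9, 1.1, 1.0, 1.05])))           # corrected position estimate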

  11. Neural components of altruistic punishment

    Directory of Open Access Journals (Sweden)

    Emily eDu

    2015-02-01

    Full Text Available Altruistic punishment, which occurs when an individual incurs a cost to punish in response to unfairness or a norm violation, may play a role in perpetuating cooperation. The neural correlates underlying costly punishment have only recently begun to be explored. Here we review the current state of research on the neural basis of altruism from the perspectives of costly punishment, emphasizing the importance of characterizing elementary neural processes underlying a decision to punish. In particular, we emphasize three cognitive processes that contribute to the decision to altruistically punish in most scenarios: inequity aversion, cost-benefit calculation, and social reference frame to distinguish self from others. Overall, we argue for the importance of understanding the neural correlates of altruistic punishment with respect to the core computations necessary to achieve a decision to punish.

  12. Neural complexity, dissociation, and schizophrenia

    Czech Academy of Sciences Publication Activity Database

    Bob, P.; Šusta, M.; Chládek, Jan; Glaslová, K.; Fedor-Ferybergh, P.

    2007-01-01

    Roč. 13, č. 10 (2007), HY1-5 ISSN 1234-1010 Institutional research plan: CEZ:AV0Z20650511 Keywords : neural complexity * dissociation * schizophrenia Subject RIV: FH - Neurology Impact factor: 1.607, year: 2007

  13. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  14. Artificial intelligence: Deep neural reasoning

    Science.gov (United States)

    Jaeger, Herbert

    2016-10-01

    The human brain can solve highly abstract reasoning problems using a neural network that is entirely physical. The underlying mechanisms are only partially understood, but an artificial network provides valuable insight. See Article p.471

  15. Optical Neural Network Classifier Architectures

    National Research Council Canada - National Science Library

    Getbehead, Mark

    1998-01-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and classification of high-dimensional data for Air...

  16. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)

  17. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed, that has achieved, in actual field demonstrations, results unattainable to date with industry standard tools

  18. Random neural Q-learning for obstacle avoidance of a mobile robot in unknown environments

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2016-07-01

    Full Text Available The article presents a random neural Q-learning strategy for the obstacle avoidance problem of an autonomous mobile robot in unknown environments. In the proposed strategy, two independent modules, namely, avoidance without considering the target and goal-seeking without considering obstacles, are first trained using the proposed random neural Q-learning algorithm to obtain their best control policies. Then, the two trained modules are combined based on a switching function to realize the obstacle avoidance in unknown environments. For the proposed random neural Q-learning algorithm, a single-hidden layer feedforward network is used to approximate the Q-function to estimate the Q-value. The parameters of the single-hidden layer feedforward network are modified using the recently proposed neural algorithm named the online sequential version of extreme learning machine, where the parameters of the hidden nodes are assigned randomly and the sample data can come one by one. However, different from the original online sequential version of extreme learning machine algorithm, the initial output weights are estimated subjected to quadratic inequality constraint to improve the convergence speed. Finally, the simulation results demonstrate that the proposed random neural Q-learning strategy can successfully solve the obstacle avoidance problem. Also, the higher learning efficiency and better generalization ability are achieved by the proposed random neural Q-learning algorithm compared with the Q-learning based on the back-propagation method.
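
    The core idea of the Q-function approximator described above can be sketched as follows: the hidden-layer parameters are assigned randomly and left fixed, and only the output weights are fitted. The sketch below uses a plain batch least-squares fit rather than the constrained online sequential extreme learning machine update of the article, and all sizes and data are illustrative.

        import numpy as np

        class RandomHiddenQNet:
            """Single-hidden-layer net with randomly assigned hidden parameters (ELM-style),
            used to approximate Q(state, action); only the output weights are learned."""
            def __init__(self, n_in, n_hidden, n_actions, seed=0):
                rng = np.random.default_rng(seed)
                self.W = rng.uniform(-1, 1, (n_hidden, n_in))   # random hidden weights, never updated
                self.b = rng.uniform(-1, 1, n_hidden)
                self.beta = np.zeros((n_actions, n_hidden))     # trainable output weights

            def hidden(self, s):
                return np.tanh(self.W @ s + self.b)

            def q_values(self, s):
                return self.beta @ self.hidden(s)

            def fit_output(self, states, targets, reg=1e-3):
                # Batch regularized least-squares for the output weights; the online OS-ELM update
                # and the quadratic-inequality-constrained initialization of the paper are omitted.
                H = np.tanh(states @ self.W.T + self.b)
                A = H.T @ H + reg * np.eye(H.shape[1])
                self.beta = np.linalg.solve(A, H.T @ targets).T

        net = RandomHiddenQNet(n_in=4, n_hidden=32, n_actions=3)
        states = np.random.default_rng(1).normal(size=(200, 4))
        targets = np.random.default_rng(2).normal(size=(200, 3))   # e.g. bootstrapped Q targets
        net.fit_output(states, targets)
        print(net.q_values(states[0]))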

  19. Bioelectrochemical control of neural cell development on conducting polymers.

    Science.gov (United States)

    Collazos-Castro, Jorge E; Polo, José L; Hernández-Labrado, Gabriel R; Padial-Cañete, Vanesa; García-Rama, Concepción

    2010-12-01

    Electrically conducting polymers hold promise for developing advanced neuroprostheses, bionic systems and neural repair devices. Among them, poly(3, 4-ethylenedioxythiophene) doped with polystyrene sulfonate (PEDOT:PSS) exhibits superior physicochemical properties but biocompatibility issues have limited its use. We describe combinations of electrochemical and molecule self-assembling methods to consistently control neural cell development on PEDOT:PSS while maintaining very low interfacial impedance. Electro-adsorbed polylysine enabled long-term neuronal survival and growth on the nanostructured polymer. Neurite extension was strongly inhibited by an additional layer of PSS or heparin, which in turn could be either removed electrically or further coated with spermine to activate cell growth. Binding basic fibroblast growth factor (bFGF) to the heparin layer inhibited neurons but promoted proliferation and migration of precursor cells. This methodology may orchestrate neural cell behavior on electroactive polymers, thus improving cell/electrode communication in prosthetic devices and providing a platform for tissue repair strategies. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Constructing general partial differential equations using polynomial and neural networks.

    Science.gov (United States)

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Comparison of 2D and 3D neural induction methods for the generation of neural progenitor cells from human induced pluripotent stem cells.

    Science.gov (United States)

    Chandrasekaran, Abinaya; Avci, Hasan X; Ochalek, Anna; Rösingh, Lone N; Molnár, Kinga; László, Lajos; Bellák, Tamás; Téglási, Annamária; Pesti, Krisztina; Mike, Arpad; Phanthong, Phetcharat; Bíró, Orsolya; Hall, Vanessa; Kitiyanant, Narisorn; Krause, Karl-Heinz; Kobolák, Julianna; Dinnyés, András

    2017-12-01

    Neural progenitor cells (NPCs) from human induced pluripotent stem cells (hiPSCs) are frequently induced using 3D culture methodologies; however, it is unknown whether spheroid-based (3D) neural induction is actually superior to monolayer (2D) neural induction. Our aim was to compare the efficiency of the 2D induction method with the 3D induction method in their ability to generate NPCs, and subsequently neurons and astrocytes. Neural differentiation was analysed at the protein level qualitatively by immunocytochemistry and quantitatively by flow cytometry for NPC (SOX1, PAX6, NESTIN), neuronal (MAP2, TUBB3), cortical layer (TBR1, CUX1) and glial markers (SOX9, GFAP, AQP4). Electron microscopy demonstrated that both methods resulted in morphologically similar neural rosettes. However, quantification of NPCs derived from 3D neural induction exhibited an increase in the number of PAX6/NESTIN double-positive cells, and the derived neurons exhibited longer neurites. In contrast, 2D neural induction resulted in more SOX1-positive cells. While 2D monolayer induction resulted in slightly less mature neurons at an early stage of differentiation, patch clamp analysis failed to reveal any significant differences in the electrophysiological properties between the two induction methods. In conclusion, 3D neural induction increases the yield of PAX6+/NESTIN+ cells and gives rise to neurons with longer neurites, which might be an advantage for the production of forebrain cortical neurons, highlighting the potential of 3D neural induction, independent of the iPSCs' genetic background. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Implantable Neural Interfaces for Sharks

    Science.gov (United States)

    2007-05-01

    The report describes implantable neural interface technology for recording from and stimulating the auditory and olfactory sensory nervous systems of the awake, swimming nurse shark, Ginglymostoma cirratum, including an overlay of the central nervous system of the nurse shark on a horizontal MR image. A related abstract is "Neural Interfaces for Characterizing Population Responses to Odorants and Electrical Stimuli in the Nurse Shark, Ginglymostoma cirratum" (AChemS Abs).

  3. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Udgivelsesdato: 2008-Feb

  4. Neural networks and its application in biomedical engineering

    International Nuclear Information System (INIS)

    Husnain, S.K.; Bhatti, M.I.

    2002-01-01

    An artificial neural network (ANN) is an information processing system that has certain performance characteristics in common with biological neural networks. A neural network is characterized by the connections between its neurons, the method of determining the weights on those connections, and its activation functions, while a biological neuron has three types of components of particular interest in understanding an artificial neuron: its dendrites, soma, and axon. The action of the chemical transmitter modifies the incoming signal. The study of neural networks is an extremely interdisciplinary field. Computer-based diagnosis is an increasingly used method that tries to improve the quality of health care. Systems based on neural networks have been developed extensively in the last ten years in the hope that medical diagnosis, and therefore medical care, would improve dramatically. The addition of a symbolic processing layer enhances ANNs in a number of ways. It is, for instance, possible to supplement a network that is purely diagnostic with a level that makes recommendations, in order to more closely simulate the nervous system. (author)

  5. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  6. UNMANNED AIR VEHICLE STABILIZATION BASED ON NEURAL NETWORK REGULATOR

    Directory of Open Access Journals (Sweden)

    S. S. Andropov

    2016-09-01

    Full Text Available The problem of stabilizing a multirotor unmanned aerial vehicle in an environment with external disturbances is studied. A classic proportional-integral-derivative controller is analyzed and its flaws are outlined: an inability to respond to changing external conditions and the need for manual adjustment of its coefficients. The paper presents an adaptive method for adjusting the coefficients of the proportional-integral-derivative controller based on neural networks. The neural network structure and its input and output data are described. Neural networks with three layers are used to create an adaptive stabilization system for the multirotor unmanned aerial vehicle. Training of the networks is done with the back-propagation method. Each neural network produces regulator coefficients for each stabilization angle as its output. A method for network training is explained. Several graphs of the transition process at different stages of learning, including processes with external disturbances, are presented. It is shown that the system meets the stabilization requirements after a sufficient number of iterations. The described coefficient adjustment method can be used in remote control of unmanned aerial vehicles operating in a changing environment.
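
    A minimal sketch of the idea, under assumptions not stated in the abstract: a small network maps the current error signals to positive PID gains, and those gains drive a standard PID law. The weights here are random stand-ins; in the adaptive scheme they would be updated by back-propagating the stabilization error after each step.

        import numpy as np

        class NeuralPIDTuner:
            """Three-layer network mapping an error feature vector to PID gains (Kp, Ki, Kd).
            Structure and sizes are illustrative, not those of the cited controller."""
            def __init__(self, n_in=3, n_hidden=6, seed=0):
                rng = np.random.default_rng(seed)
                self.W1 = rng.normal(scale=0.3, size=(n_hidden, n_in))
                self.W2 = rng.normal(scale=0.3, size=(3, n_hidden))

            def gains(self, features):
                h = np.tanh(self.W1 @ features)
                return np.exp(self.W2 @ h)          # exponential keeps the gains positive

        def pid_step(gains, e, e_int, e_prev, dt):
            kp, ki, kd = gains
            return kp * e + ki * e_int + kd * (e - e_prev) / dt

        # One control step for a roll-angle error of 0.1 rad (all values are made up).
        tuner = NeuralPIDTuner()
        e, e_int, e_prev, dt = 0.1, 0.02, 0.12, 0.01
        features = np.array([e, e_int, (e - e_prev) / dt])
        u = pid_step(tuner.gains(features), e, e_int, e_prev, dt)
        print(u)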

  7. Neural Dynamics and Information Representation in Microcircuits of Motor Cortex

    Directory of Open Access Journals (Sweden)

    Yasuhiro eTsubo

    2013-05-01

    Full Text Available The brain has to analyze and respond to external events that can change rapidly from time to time, suggesting that information processing by the brain may be essentially dynamic rather than static. The dynamical features of neural computation are of significant importance in motor cortex that governs the process of movement generation and learning. In this paper, we discuss these features based primarily on our recent findings on neural dynamics and information coding in the microcircuit of rat motor cortex. In fact, cortical neurons show a variety of dynamical behavior from rhythmic activity in various frequency bands to highly irregular spike firing. Of particular interest are the similarity and dissimilarity of the neuronal response properties in different layers of motor cortex. By conducting electrophysiological recordings in slice preparation, we report the phase response curves of neurons in different cortical layers to demonstrate their layer-dependent synchronization properties. We then study how motor cortex recruits task-related neurons in different layers for voluntary arm movements by simultaneous juxtacellular and multiunit recordings from behaving rats. The results suggest an interesting difference in the spectrum of functional activity between the superficial and deep layers. Furthermore, the task-related activities recorded from various layers exhibited power law distributions of inter-spike intervals (ISIs, in contrast to a general belief that ISIs obey Poisson or Gamma distributions in cortical neurons. We present a theoretical argument that this power law of in vivo neurons may represent the maximization of the entropy of firing rate with limited energy consumption of spike generation. Though further studies are required to fully clarify the functional implications of this coding principle, it may shed new light on information representations by neurons and circuits in motor cortex.

  8. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  9. Isolation and culture of neural crest cells from embryonic murine neural tube.

    Science.gov (United States)

    Pfaltzgraff, Elise R; Mundell, Nathan A; Labosky, Patricia A

    2012-06-02

    The embryonic neural crest (NC) is a multipotent progenitor population that originates at the dorsal aspect of the neural tube, undergoes an epithelial to mesenchymal transition (EMT) and migrates throughout the embryo, giving rise to diverse cell types. NC also has the unique ability to influence the differentiation and maturation of target organs. When explanted in vitro, NC progenitors undergo self-renewal, migrate and differentiate into a variety of tissue types including neurons, glia, smooth muscle cells, cartilage and bone. NC multipotency was first described from explants of the avian neural tube. In vitro isolation of NC cells facilitates the study of NC dynamics including proliferation, migration, and multipotency. Further work in the avian and rat systems demonstrated that explanted NC cells retain their NC potential when transplanted back into the embryo. Because these inherent cellular properties are preserved in explanted NC progenitors, the neural tube explant assay provides an attractive option for studying the NC in vitro. To attain a better understanding of the mammalian NC, many methods have been employed to isolate NC populations. NC-derived progenitors can be cultured from post-migratory locations in both the embryo and adult to study the dynamics of post-migratory NC progenitors, however isolation of NC progenitors as they emigrate from the neural tube provides optimal preservation of NC cell potential and migratory properties. Some protocols employ fluorescence activated cell sorting (FACS) to isolate a NC population enriched for particular progenitors. However, when starting with early stage embryos, cell numbers adequate for analyses are difficult to obtain with FACS, complicating the isolation of early NC populations from individual embryos. Here, we describe an approach that does not rely on FACS and results in an approximately 96% pure NC population based on a Wnt1-Cre activated lineage reporter. The method presented here is adapted from

  10. Neural correlates of hate.

    Directory of Open Access Journals (Sweden)

    Semir Zeki

    Full Text Available In this work, we address an important but unexplored topic, namely the neural correlates of hate. In a block-design fMRI study, we scanned 17 normal human subjects while they viewed the face of a person they hated and also faces of acquaintances for whom they had neutral feelings. A hate score was obtained for the object of hate for each subject and this was used as a covariate in a between-subject random effects analysis. Viewing a hated face resulted in increased activity in the medial frontal gyrus, right putamen, bilaterally in premotor cortex, in the frontal pole and bilaterally in the medial insula. We also found three areas where activation correlated linearly with the declared level of hatred, the right insula, right premotor cortex and the right fronto-medial gyrus. One area of deactivation was found in the right superior frontal gyrus. The study thus shows that there is a unique pattern of activity in the brain in the context of hate. Though distinct from the pattern of activity that correlates with romantic love, this pattern nevertheless shares two areas with the latter, namely the putamen and the insula.

  11. neural control system

    International Nuclear Information System (INIS)

    Elshazly, A.A.E.

    2002-01-01

    Automatic power stabilization control is the desired objective for any reactor operation, especially in nuclear power plants. A major problem in this area is the inevitable gap between a real plant and the theory of conventional analysis and synthesis of linear time-invariant systems. In particular, trajectory tracking control of a nonlinear plant is a class of problems in which the classical linear transfer function methods break down, because no transfer function can represent the system over the entire operating region. There is a considerable amount of research on the model-inverse approach using the feedback linearization technique. However, this method requires a precise plant model to implement the exact linearizing feedback; for nuclear reactor systems, this is not an easy task because of the uncertainty in the plant parameters and the un-measurable state variables. Therefore, artificial neural networks (ANNs) are used either in self-tuning control or in improving conventional rule-based expert systems. The main objective of this thesis is to suggest an ANN-based self-learning controller structure. This method is capable of on-line reinforcement learning and control for a nuclear reactor with a totally unknown dynamics model. Previous research was based on the back-propagation algorithm. Back-propagation (BP), fast back-propagation (FBP), and Levenberg-Marquardt (LM) algorithms are discussed and compared for reinforcement learning. It is found that the LM algorithm is clearly superior.

  12. Multi-layer monochromator

    International Nuclear Information System (INIS)

    Schoenborn, B.P.; Caspar, D.L.D.

    1975-01-01

    This invention provides an artificial monochromator crystal for efficiently selecting a narrow band of neutron wavelengths from a neutron beam having a Maxwellian wavelength distribution, by providing on a substrate a plurality of germanium layers, and alternate periodic layers of a different metal having tailored thicknesses, shapes, and volumetric and neutron scattering densities. (U.S.)

  13. Ozone Layer Protection

    Science.gov (United States)

    The stratospheric ozone layer is Earth's "sunscreen", protecting life at the surface from harmful ultraviolet radiation. This website addresses stratospheric ozone protection and related programs, including the GreenChill Partnership, the Responsible Appliance Disposal (RAD) Program, and Ozone Protection vs. Ozone Pollution.

  14. Skin layer mechanics

    NARCIS (Netherlands)

    Geerligs, M.

    2010-01-01

    The human skin is composed of several layers, each with an unique structure and function. Knowledge about the mechanical behavior of these skin layers is important for clinical and cosmetic research, such as the development of personal care products and the understanding of skin diseases. Until

  15. Stable Boundary Layer Issues

    NARCIS (Netherlands)

    Steeneveld, G.J.

    2012-01-01

    Understanding and prediction of the stable atmospheric boundary layer is a challenging task. Many physical processes are relevant in the stable boundary layer, i.e. turbulence, radiation, land surface coupling, orographic turbulent and gravity wave drag, and land surface heterogeneity. The

  16. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
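
    The following is a generic rate-based sketch of a ring network with spike-frequency adaptation, intended only to illustrate the kind of dynamics discussed above; the equations, parameters and connectivity profile are illustrative assumptions, not those of the cited analysis.

        import numpy as np

        N, dt, tau_r, tau_a = 64, 1e-3, 0.01, 0.2      # units, time step, rate and adaptation time constants
        theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
        # Ring connectivity: nearby units excite, distant units inhibit.
        W = 0.9 * np.cos(theta[:, None] - theta[None, :]) / N
        stimulus = np.exp(np.cos(theta - np.pi))        # bump-shaped external input

        r = np.zeros(N)                                 # firing rates
        a = np.zeros(N)                                 # adaptation variable (SFA)
        for _ in range(2000):
            drive = W @ r + stimulus - a                # adaptation subtracts from the total input
            r += dt / tau_r * (-r + np.maximum(drive, 0.0))
            a += dt / tau_a * (-a + 0.5 * r)            # adaptation slowly tracks the rate

        print(float(r.max()), int(r.argmax()))          # amplitude and position of the activity bump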

  17. Development of boundary layers

    International Nuclear Information System (INIS)

    Herbst, R.

    1980-01-01

    Boundary layers develop along the blade surfaces on both the pressure and the suction side in a non-stationary flow field. This is due to the fact that there is a strongly fluctuating flow on the downstream blade row, especially as a result of the wakes of the upstream blade row. The author investigates the formation of boundary layers under non-stationary flow conditions and tries to establish a model describing the non-stationary boundary layer. For this purpose, plate boundary layers are measured at constant flow rates but different interference frequencies and variable pressure gradients. By introducing the sampling technique, measurements of the non-stationary boundary layer become possible, and the flow rate fluctuation can be divided into its components, i.e. stochastic turbulence and periodic fluctuation. (GL) [de

  18. Improved electron transport layer

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention provides: a method of preparing a coating ink for forming a zinc oxide electron transport layer, comprising mixing zinc acetate and a wetting agent in water or methanol; a coating ink comprising zinc acetate and a wetting agent in aqueous solution or methanolic solution......; a method of preparing a zinc oxide electron transporting layer, which method comprises: i) coating a substrate with the coating ink of the present invention to form a film; ii) drying the film; and iii) heating the dry film to convert the zinc acetate substantially to ZnO; a method of preparing an organic...... photovoltaic device or an organic LED having a zinc oxide electron transport layer, the method comprising, in this order: a) providing a substrate bearing a first electrode layer; b) forming an electron transport layer according to the following method: i) coating a coating ink comprising an ink according...

  19. Study on algorithm of process neural network for soft sensing in sewage disposal system

    Science.gov (United States)

    Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying

    2006-11-01

    A new soft-sensing method based on a process neural network (PNN) for a sewage disposal system is presented in this paper. A PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, giving the network the ability to handle information in the two dimensions of space and time simultaneously, so the data-processing machinery of the biological neuron is imitated better than by a traditional neuron. A three-layer process neural network for soft sensing, in which the hidden layer consists of process neurons and the input and output layers consist of common neurons, is discussed. The intelligent soft sensing based on the PNN can be used to measure the effluent BOD (Biochemical Oxygen Demand) of the sewage disposal system, and a good training result for the soft sensor was obtained by this method.

  20. Modeling and prediction of Turkey's electricity consumption using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Kavaklioglu, Kadir; Ozturk, Harun Kemal; Canyurt, Olcay Ersel; Ceylan, Halim

    2009-01-01

    Artificial Neural Networks are proposed to model and predict electricity consumption of Turkey. Multi layer perceptron with backpropagation training algorithm is used as the neural network topology. Tangent-sigmoid and pure-linear transfer functions are selected in the hidden and output layer processing elements, respectively. These input-output network models are a result of relationships that exist among electricity consumption and several other socioeconomic variables. Electricity consumption is modeled as a function of economic indicators such as population, gross national product, imports and exports. It is also modeled using export-import ratio and time input only. Performance comparison among different models is made based on absolute and percentage mean square error. Electricity consumption of Turkey is predicted until 2027 using data from 1975 to 2006 along with other economic indicators. The results show that electricity consumption can be modeled using Artificial Neural Networks, and the models can be used to predict future electricity consumption. (author)
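
    A minimal sketch of the described topology (one tangent-sigmoid hidden layer, linear output) is given below using scikit-learn; the indicator values are random stand-ins and the solver is scikit-learn's default gradient-based optimizer rather than the exact backpropagation setup of the paper.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Toy stand-in data: [population, GNP, imports, exports] per year -> consumption (GWh).
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(32, 4))
        y = 50 + 120 * X[:, 0] + 80 * X[:, 1] + rng.normal(scale=2, size=32)

        # Tangent-sigmoid hidden layer, linear (identity) output layer.
        model = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                             max_iter=5000, random_state=0)
        model.fit(X, y)
        print(model.predict(X[:3]))   # fitted consumption for the first three "years"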

  1. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of artificial neural networks is the learning algorithm. The performance of the multilayer feed-forward artificial neural network in image compression using different learning algorithms is examined in this paper. Three different error back-propagation algorithms, based on Gradient Descent, Conjugate Gradient and Quasi-Newton techniques, have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm has better performance than the other two algorithms.
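
    As a rough illustration, a bottleneck multilayer perceptron trained to reproduce its input compresses each image block into its hidden activations, and different training algorithms can be compared on the reconstruction error. The sketch below uses random data in place of real image blocks and scikit-learn's 'sgd' (gradient descent) and 'lbfgs' (a quasi-Newton method) solvers; a conjugate-gradient variant is not available there and is omitted.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        patches = rng.uniform(0, 1, size=(500, 16))     # stand-in for 4x4 image blocks

        # Bottleneck MLP (16 -> 4 -> 16): each block is coded by its 4 hidden activations.
        for solver in ('sgd', 'lbfgs'):
            net = MLPRegressor(hidden_layer_sizes=(4,), activation='logistic',
                               solver=solver, max_iter=3000, random_state=0)
            net.fit(patches, patches)
            err = np.mean((net.predict(patches) - patches) ** 2)
            print(solver, round(float(err), 5))         # reconstruction error per solver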

  2. Comparing Two Methods of Neural Networks to Evaluate Dead Oil Viscosity

    Directory of Open Access Journals (Sweden)

    Meysam Dabiri-Atashbeyk

    2018-01-01

    Full Text Available Reservoir characterization and asset management require comprehensive information about formation fluids. In fact, it is not possible to find accurate solutions to many petroleum engineering problems without having accurate pressure-volume-temperature (PVT) data. Traditionally, fluid information has been obtained by capturing samples and then measuring the PVT properties in a laboratory. In recent years, neural networks have been applied to a large number of petroleum engineering problems. In this paper, a multi-layer perceptron neural network and a radial basis function network (both optimized by a genetic algorithm) were used to evaluate the dead oil viscosity of crude oil, and it was found that the dead oil viscosity estimated by the multi-layer perceptron neural network was more accurate than the one obtained by the radial basis function network.

  3. Determining the confidence levels of sensor outputs using neural networks

    International Nuclear Information System (INIS)

    Broten, G.S.; Wood, H.C.

    1995-01-01

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  4. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a current or former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  5. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2016-01-01

    Full Text Available The presented paper compares forecasts of drought indices based on two different models of artificial neural networks. The first model is based on a feedforward multilayer perceptron, sANN, and the second one is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI); they were derived for the period 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of the models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons.
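
    As a loose illustration of training network weights with differential evolution, the sketch below fits a one-hidden-layer perceptron to a toy lagged series using the classic DE/rand/1/bin scheme rather than JADE; the network size, series, and control parameters are arbitrary assumptions.

        import numpy as np

        def forecast(params, x, n_in=3, n_hidden=5):
            """One-hidden-layer perceptron whose weights are packed into a flat vector."""
            i = 0
            W1 = params[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
            b1 = params[i:i + n_hidden]; i += n_hidden
            W2 = params[i:i + n_hidden]; i += n_hidden
            b2 = params[i]
            return np.tanh(x @ W1.T + b1) @ W2 + b2

        def rmse(params, X, y):
            return np.sqrt(np.mean((forecast(params, X) - y) ** 2))

        rng = np.random.default_rng(0)
        series = np.sin(np.arange(100) / 6.0)                    # toy stand-in for a drought-index series
        X = np.stack([series[i:i + 3] for i in range(96)])       # three lagged values as inputs
        y = series[3:99]

        dim, pop_size, F, CR = 5 * 3 + 5 + 5 + 1, 30, 0.6, 0.9
        pop = rng.uniform(-1, 1, (pop_size, dim))
        cost = np.array([rmse(p, X, y) for p in pop])
        for _ in range(200):                                     # simplified DE loop (no index exclusion)
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
                t_cost = rmse(trial, X, y)
                if t_cost < cost[i]:
                    pop[i], cost[i] = trial, t_cost
        print(cost.min())                                        # best training RMSE found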

  6. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

    Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to online approximate an ideal controller. The DRBF network can create new hidden neurons online if the input data fall outside the coverage of the existing hidden layer, and prune insignificant hidden neurons online if a hidden neuron becomes inappropriate. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, the simulation results verify that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.
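
    The grow-and-prune behaviour of such a dynamic RBF network can be sketched as follows; the growth and pruning thresholds and the plain gradient update below are illustrative assumptions and omit the smooth compensator and the Lyapunov-derived variable learning rates.

        import numpy as np

        class DynamicRBF:
            """RBF network that grows a hidden neuron when an input is not covered by
            existing centers and prunes neurons whose weights have become insignificant."""
            def __init__(self, width=0.5, grow_dist=1.0, prune_tol=1e-3):
                self.centers, self.weights = [], []
                self.width, self.grow_dist, self.prune_tol = width, grow_dist, prune_tol

            def _phi(self, x):
                return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                                 for c in self.centers])

            def predict(self, x):
                return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

            def update(self, x, target, lr=0.2):
                # Grow: the input falls outside the coverage of every existing hidden neuron.
                if not self.centers or min(np.linalg.norm(x - c) for c in self.centers) > self.grow_dist:
                    self.centers.append(np.array(x, dtype=float))
                    self.weights.append(0.0)
                # Gradient step on the output weights toward the target.
                phi, err = self._phi(x), target - self.predict(x)
                for j in range(len(self.weights)):
                    self.weights[j] += lr * err * phi[j]
                # Prune hidden neurons whose weights have become insignificant.
                keep = [j for j, w in enumerate(self.weights)
                        if abs(w) > self.prune_tol or len(self.weights) == 1]
                self.centers = [self.centers[j] for j in keep]
                self.weights = [self.weights[j] for j in keep]

        net = DynamicRBF()
        for x in np.linspace(-2, 2, 50):
            net.update(np.array([x]), np.sin(2 * x))
        print(len(net.centers), net.predict(np.array([0.5])))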

  7. Identification of Complex Dynamical Systems with Neural Networks (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical system in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  8. Identification of Complex Dynamical Systems with Neural Networks (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical system in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  9. Real time track finding in a drift chamber with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-01-01

    In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare the chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. (orig.)

  10. Fault detection and classification in electrical power transmission system using artificial neural network.

    Science.gov (United States)

    Jamil, Majid; Sharma, Sanjeev Kumar; Singh, Rajveer

    2015-01-01

    This paper focuses on the detection and classification of faults on an electrical power transmission line using artificial neural networks. The three phase currents and voltages of one end are taken as inputs in the proposed scheme. A feed-forward neural network with the back-propagation algorithm has been employed for detection and classification of the fault, with each of the three phases analysed in the process. A detailed analysis with a varying number of hidden layers has been performed to validate the choice of the neural network. The simulation results show that the present neural-network-based method is efficient in detecting and classifying faults on transmission lines with satisfactory performance. The different faults are simulated with different parameters to check the versatility of the method. The proposed method can be extended to the distribution network of the power system. The various simulations and signal analyses are performed in the MATLAB(®) environment.

  11. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  12. Artificial neural networks for spatial distribution of fuel assemblies in reload of PWR reactors

    International Nuclear Information System (INIS)

    Oliveira, Edyene; Castro, Victor F.; Velásquez, Carlos E.; Pereira, Claubia

    2017-01-01

    An artificial neural network methodology is being developed in order to find an optimum spatial distribution of the fuel assemblies in a nuclear reactor core during reload. The main bounding parameter of the modelling was the neutron multiplication factor, k_eff. The characteristics of the network are defined by the nuclear parameters: cycle, burnup, enrichment, fuel type, and average power peak of each element. These parameters were obtained with the ORNL nuclear code package SCALE6.0. As for the artificial neural network, feedforward Multi-Layer Perceptrons with various numbers of layers and neurons were constructed. Three training algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization). The artificial neural networks were implemented using MATLAB version 2015a. As a preliminary result, the spatial distribution of the fuel assemblies in the core obtained using a neural network was slightly better than the standard core. (author)
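
    The record reports MATLAB MLPs trained with LM, SCG and Bayesian regularization on SCALE6.0-derived parameters; the sketch below only illustrates the regression setup, using scikit-learn with L-BFGS as a stand-in optimiser and placeholder data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical feature layout: cycle, burnup, enrichment, fuel type, average power peak
X = np.random.rand(200, 5)            # placeholder assembly descriptors
y = 0.9 + 0.2 * np.random.rand(200)   # placeholder k_eff targets

# scikit-learn offers no Levenberg-Marquardt or Bayesian-regularization solver,
# so L-BFGS stands in for the second-order training used in the record.
model = MLPRegressor(hidden_layer_sizes=(10, 10), activation='tanh',
                     solver='lbfgs', max_iter=2000).fit(X, y)
k_eff_pred = model.predict(X[:5])
```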

  13. Adiabatic superconducting cells for ultra-low-power artificial neural networks

    Directory of Open Access Journals (Sweden)

    Andrey E. Schegolev

    2016-10-01

    Full Text Available We propose the concept of using superconducting quantum interferometers for the implementation of neural network algorithms with extremely low power dissipation. These adiabatic elements are Josephson cells with sigmoid- and Gaussian-like activation functions. We optimize their parameters for application in three-layer perceptron and radial basis function networks.

  14. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  15. Folk music style modelling by recurrent neural networks with long short term memory units

    OpenAIRE

    Sturm, Bob; Santos, João Felipe; Korshunova, Iryna

    2015-01-01

    We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our on-going work.
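
    A rough sketch of a three-hidden-layer LSTM sequence model of the kind described; the vocabulary size, embedding width and layer widths are assumptions, not the authors' settings:

```python
import tensorflow as tf

VOCAB = 100  # assumed token vocabulary size for ABC-style folk-tune transcriptions

# Three stacked LSTM hidden layers predicting the next token at every step.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.Dense(VOCAB, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```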

  16. Internal measuring models in trained neural networks for parameter estimation from images

    NARCIS (Netherlands)

    Feng, Tian-Jin; Feng, T.J.; Houkes, Z.; Korsten, Maarten J.; Spreeuwers, Lieuwe Jan

    1992-01-01

    The internal representations of 'learned' knowledge in neural networks are still poorly understood, even for backpropagation networks. The paper discusses a possible interpretation of learned knowledge of a network trained for parameter estimation from images. The outputs of the hidden layer are the

  17. Predicting the topology of dynamic neural networks for the simulation of electronic circuits

    NARCIS (Netherlands)

    Schilders, W.H.A.

    2009-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  18. Artificial Neural Networks to Detect Risk of Type 2 Diabetes | Baha ...

    African Journals Online (AJOL)

    A multilayer feedforward architecture with a backpropagation algorithm was designed using the Neural Network Toolbox of Matlab. The network was trained using batch-mode backpropagation with gradient descent and momentum. The best-performing network identified during training had 2 hidden layers of 6 and 3 neurons, ...
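
    A minimal sketch of the reported 6-and-3-neuron architecture using scikit-learn; the risk-factor features and hyperparameters are placeholders, not those of the study:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder risk-factor matrix and diabetes labels (the real features are unspecified here)
X, y = np.random.rand(300, 8), np.random.randint(0, 2, 300)

# Two hidden layers of 6 and 3 neurons, batch gradient descent with momentum,
# mirroring the architecture reported in the record.
clf = MLPClassifier(hidden_layer_sizes=(6, 3), solver='sgd', momentum=0.9,
                    batch_size=300, learning_rate_init=0.05, max_iter=2000)
clf.fit(X, y)
risk = clf.predict_proba(X[:1])   # estimated probability of type 2 diabetes risk
```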

  19. A robust neural network-based approach for microseismic event detection

    KAUST Repository

    Akram, Jubran; Ovcharenko, Oleg; Peter, Daniel

    2017-01-01

    We present an artificial neural network based approach for robust event detection from low S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied on the entire example dataset

  20. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity involved in solving a constrained optimization problem, as usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities of O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  1. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    Science.gov (United States)

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The obtained optimal architecture of the artificial neural network model was a feed-forward neural network with three input neurons, one hidden layer with eight neurons and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R(2) of 0.9571, which implied a good agreement between the predicted value and the actual value, and confirmed a good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content measured under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which was well matched with the predicted value (597.2 mg/g DW). This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
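
    A hedged sketch of the 3-8-1 regression network with a crude random search standing in for the genetic-algorithm optimisation step; the data below are placeholders, not the experimental measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder inputs: pressure (MPa), liquid/solid ratio (mL/g), ethanol (%)
X = np.column_stack([np.random.uniform(100, 600, 60),
                     np.random.uniform(10, 30, 60),
                     np.random.uniform(30, 90, 60)])
y = np.random.uniform(400, 600, 60)          # placeholder total phenolic content (mg/g DW)

# 3-8-1 feed-forward architecture as reported in the record
net = MLPRegressor(hidden_layer_sizes=(8,), activation='logistic',
                   max_iter=5000).fit(X, y)

# Crude random search over the operating region, standing in for the GA step
cand = np.column_stack([np.random.uniform(100, 600, 10000),
                        np.random.uniform(10, 30, 10000),
                        np.random.uniform(30, 90, 10000)])
best_conditions = cand[np.argmax(net.predict(cand))]
```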

  2. Boosting water oxidation layer-by-layer.

    Science.gov (United States)

    Hidalgo-Acosta, Jonnathan C; Scanlon, Micheál D; Méndez, Manuel A; Amstutz, Véronique; Vrubel, Heron; Opallo, Marcin; Girault, Hubert H

    2016-04-07

    Electrocatalysis of water oxidation was achieved using fluorinated tin oxide (FTO) electrodes modified with layer-by-layer deposited films consisting of bilayers of negatively charged citrate-stabilized IrO2 NPs and positively charged poly(diallyldimethylammonium chloride) (PDDA) polymer. The IrO2 NP surface coverage can be fine-tuned by controlling the number of bilayers. The IrO2 NP films were amorphous, with the NPs therein being well-dispersed and retaining their as-synthesized shape and sizes. UV/vis spectroscopic and spectro-electrochemical studies confirmed that the total surface coverage and electrochemically addressable surface coverage of IrO2 NPs increased linearly with the number of bilayers up to 10 bilayers. The voltammetry of the modified electrode was that of hydrous iridium oxide films (HIROFs) with an observed super-Nernstian pH response of the Ir(III)/Ir(IV) and Ir(IV)-Ir(IV)/Ir(IV)-Ir(V) redox transitions and Nernstian shift of the oxygen evolution onset potential. The overpotential of the oxygen evolution reaction (OER) was essentially pH independent, varying only from 0.22 V to 0.28 V (at a current density of 0.1 mA cm(-2)), moving from acidic to alkaline conditions. Bulk electrolysis experiments revealed that the IrO2/PDDA films were stable and adherent under acidic and neutral conditions but degraded in alkaline solutions. Oxygen was evolved with Faradaic efficiencies approaching 100% under acidic (pH 1) and neutral (pH 7) conditions, and 88% in alkaline solutions (pH 13). This layer-by-layer approach forms the basis of future large-scale OER electrode development using ink-jet printing technology.

  3. Modeling by artificial neural networks. Application to the management of fuel in a nuclear power plant

    International Nuclear Information System (INIS)

    Gaudier, F.

    1999-01-01

    The determination of the family of optimum core loading patterns for Pressurized Water Reactors (PWRs) involves the assessment of core attributes, such as the power peaking factor, for thousands of candidate loading patterns. Despite the rapid advances in computer architecture, the direct calculation of these attributes by a neutronic code requires a lot of time and memory. With the goal of reducing the calculation time and optimizing the loading pattern, we propose in this thesis a method based on ideas from neural and statistical learning to provide a feed-forward neural network capable of calculating the power peaking corresponding to an eighth-core PWR. We use statistical methods to deduce judicious inputs (reduction of the input space dimension) and neural methods to train the model (learning capabilities). Indeed, on one hand, a principal component analysis allows us to characterize the fuel assemblies (the neural model inputs) more efficiently, and on the other hand, the introduction of a priori knowledge allows us to reduce the number of free parameters in the neural network. The model was built using a multilayer perceptron trained with the standard back-propagation algorithm. We introduced our neural network into the automatic optimization code FORMOSA, and on EDF real problems we showed an important saving in time. Finally, we propose a hybrid method combining the best characteristics of the linear local approximator GPT (Generalized Perturbation Theory) and the artificial neural network. (author)
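
    An illustrative sketch of the idea of compressing assembly descriptors with PCA before a perceptron regressor; the feature dimensions, network size and data are assumptions, not the thesis configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Placeholder assembly descriptors and power-peaking targets
X = np.random.rand(500, 40)
y = 1.0 + 0.5 * np.random.rand(500)

# PCA reduces the input space dimension before the multilayer perceptron,
# mirroring the statistical-then-neural two-stage idea of the record.
model = make_pipeline(PCA(n_components=10),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000))
model.fit(X, y)
peaking = model.predict(X[:3])
```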

  4. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    Science.gov (United States)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized for performing best in cluttered scenes of the true-class object. Due to the shift invariance properties inherited by its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.

  5. Integration of Neural Networks and Cellular Automata for Urban Planning

    Institute of Scientific and Technical Information of China (English)

    Anthony Gar-on Yeh; LI Xia

    2004-01-01

    This paper presents a new type of cellular automata (CA) model for the simulation of alternative land development using neural networks for urban planning. CA models can be regarded as a planning tool because they can generate alternative urban growth. Alternative development patterns can be formed by using different sets of parameter values in CA simulation. A critical issue is how to define parameter values for realistic and idealized simulation. This paper demonstrates that neural networks can simplify CA models but generate more plausible results. The simulation is based on a simple three-layer network with an output neuron to generate conversion probability. No transition rules are required for the simulation. Parameter values are automatically obtained from the training of network by using satellite remote sensing data. Original training data can be assessed and modified according to planning objectives. Alternative urban patterns can be easily formulated by using the modified training data sets rather than changing the model.

  6. Neural PID Control Strategy for Networked Process Control

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang

    2013-01-01

    Full Text Available A new method with a two-layer hierarchy is presented, based on a neural proportional-integral-derivative (PID) iterative learning method over the communication network, for the closed-loop automatic tuning of a PID controller. It can enhance the performance of the well-known simple PID feedback control loop in the local field when real networked process control is applied to systems with uncertain factors, such as external disturbance or randomly delayed measurements. The proposed PID iterative learning method is implemented by backpropagation neural networks whose weights are updated by minimizing the tracking error entropy of the closed-loop system. Convergence in the mean square sense is analysed for closed-loop networked control systems. To demonstrate the potential applications of the proposed strategies, a pressure-tank experiment is provided to show the usefulness and effectiveness of the proposed design method in networked process control systems.

  7. Relabeling exchange method (REM) for learning in neural networks

    Science.gov (United States)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when 'optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  8. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared using real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of the proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches

  9. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order-stability and fractional-order-sensitivity characteristics.

  10. Design of a Thermoacoustic Sensor for Low Intensity Ultrasound Measurements Based on an Artificial Neural Network.

    Science.gov (United States)

    Xing, Jida; Chen, Jie

    2015-06-23

    In therapeutic ultrasound applications, accurate ultrasound output intensities are crucial because the physiological effects of therapeutic ultrasound are very sensitive to the intensity and duration of these applications. Although radiation force balance is a benchmark technique for measuring ultrasound intensity and power, it is costly, difficult to operate, and compromised by noise vibration. To overcome these limitations, the development of a low-cost, easy to operate, and vibration-resistant alternative device is necessary for rapid ultrasound intensity measurement. Therefore, we proposed and validated a novel two-layer thermoacoustic sensor using an artificial neural network technique to accurately measure low ultrasound intensities between 30 and 120 mW/cm2. The first layer of the sensor design is a cylindrical absorber made of plexiglass, followed by a second layer composed of polyurethane rubber with a high attenuation coefficient to absorb extra ultrasound energy. The sensor determined ultrasound intensities according to a temperature elevation induced by heat converted from incident acoustic energy. Compared with our previous one-layer sensor design, the new two-layer sensor enhanced the ultrasound absorption efficiency to provide more rapid and reliable measurements. Using a three-dimensional model in the K-wave toolbox, our simulation of the ultrasound propagation process demonstrated that the two-layer design is more efficient than the single layer design. We also integrated an artificial neural network algorithm to compensate for the large measurement offset. After obtaining multiple parameters of the sensor characteristics through calibration, the artificial neural network is built to correct temperature drifts and increase the reliability of our thermoacoustic measurements through iterative training about ten seconds. The performance of the artificial neural network method was validated through a series of experiments. Compared to our previous

  11. The Application of Layer Theory to Design: The Control Layer

    Science.gov (United States)

    Gibbons, Andrew S.; Langton, Matthew B.

    2016-01-01

    A theory of design layers proposed by Gibbons ("An Architectural Approach to Instructional Design." Routledge, New York, 2014) asserts that each layer of an instructional design is related to a body of theory closely associated with the concerns of that particular layer. This study focuses on one layer, the control layer, examining…

  12. The Dissolved Oxygen Prediction Method Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhong Xiao

    2017-01-01

    Full Text Available The dissolved oxygen (DO) is oxygen dissolved in water, which is an important factor for aquaculture. A BP neural network method combining the purelin, logsig, and tansig activation functions is proposed for the prediction of dissolved oxygen in aquaculture. The input layer, hidden layer, and output layer are introduced in detail, including the weight adjustment process. Breeding data from three ponds over 10 consecutive days were used for the experiments; these ponds are located in Beihai, Guangxi, a traditional aquaculture base in southern China. The data of the first 7 days are used for training, and the data of the latter 3 days are used for testing. Compared with common prediction models, curve fitting (CF), autoregression (AR), the grey model (GM), and support vector machines (SVM), the experimental results show that the prediction accuracy of the neural network is the highest, and all the predicted values are within the 5% error limit, which can meet the needs of practical applications; it is followed by AR, GM, SVM, and CF. The prediction model can help to improve the water quality monitoring level of aquaculture, which will prevent the deterioration of water quality and the outbreak of disease.
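
    A minimal forward-pass sketch showing how the MATLAB-style purelin, logsig and tansig activations could be combined in one BP network; the layer sizes, inputs and (untrained) weights are illustrative only:

```python
import numpy as np

logsig = lambda x: 1.0 / (1.0 + np.exp(-x))   # MATLAB-style log-sigmoid
tansig = np.tanh                               # MATLAB-style tan-sigmoid
purelin = lambda x: x                          # linear output activation

rng = np.random.default_rng(0)
# Untrained example weights: 5 assumed water-quality inputs -> 8 -> 4 -> 1 (DO)
W1, W2, W3 = rng.normal(size=(5, 8)), rng.normal(size=(8, 4)), rng.normal(size=(4, 1))

def predict(x):
    h1 = tansig(x @ W1)        # first hidden layer
    h2 = logsig(h1 @ W2)       # second hidden layer
    return purelin(h2 @ W3)    # predicted dissolved oxygen

do_hat = predict(rng.random((3, 5)))
```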

  13. Residual Deep Convolutional Neural Network Predicts MGMT Methylation Status.

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L; Lachance, Daniel H; Parney, Ian F; Buckner, Jan C; Erickson, Bradley J

    2017-10-01

    Predicting the methylation status of the O6-methylguanine methyltransferase (MGMT) gene from MRI is of high importance, since it is a predictor of response and prognosis in brain tumors. In this study, we compare three different residual deep neural network (ResNet) architectures to evaluate their ability to predict MGMT methylation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture was the best performing model, achieving an accuracy of 94.90% (+/- 3.92%) for the test set (classification of a slice as no tumor, methylated MGMT, or non-methylated). ResNet34 (34 layers) achieved 80.72% (+/- 13.61%) while ResNet18 (18 layers) accuracy was 76.75% (+/- 20.67%). ResNet50 performance was statistically significantly better than both the ResNet18 and ResNet34 architectures. These results suggest that deep neural architectures can be used to predict molecular biomarkers from routine medical images.

  14. Neural network analysis in pharmacogenetics of mood disorders

    Directory of Open Access Journals (Sweden)

    Serretti Alessandro

    2004-12-01

    Full Text Available Abstract Background The increasing number of available genotypes for genetic studies in humans requires more advanced techniques of analysis. We previously reported significant univariate associations between gene polymorphisms and antidepressant response in mood disorders. However, the combined analysis of multiple gene polymorphisms and clinical variables requires the use of non-linear methods. Methods In the present study we tested a neural network strategy for a combined analysis of two gene polymorphisms. A Multi-Layer Perceptron model showed the best performance and was therefore selected over the other networks. One hundred and twenty-one depressed inpatients treated with fluvoxamine in the context of previously reported pharmacogenetic studies were included. The polymorphism in the transcriptional control region upstream of the 5HTT coding sequence (SERTPR) and in the Tryptophan Hydroxylase (TPH) gene were analysed simultaneously. Results A multi-layer perceptron network composed of one hidden layer with 7 nodes was chosen. 77.5% of responders and 51.2% of non-responders were correctly classified (ROC area = 0.731, empirical p value = 0.0082). Finally, we performed a comparison with traditional techniques. A discriminant function analysis correctly classified 34.1% of responders and 68.1% of non-responders (F = 8.16, p = 0.0005). Conclusions Overall, our findings suggest that neural networks may be a valid technique for the analysis of gene polymorphisms in pharmacogenetic studies. The complex interactions modelled through NN may eventually be applied at the clinical level for individualized therapy.

  15. Topologically nontrivial quantum layers

    International Nuclear Information System (INIS)

    Carron, G.; Exner, P.; Krejcirik, D.

    2004-01-01

    Given a complete noncompact surface Σ embedded in R^3, we consider the Dirichlet Laplacian in the layer Ω that is defined as a tubular neighborhood of constant width about Σ. Using an intrinsic approach to the geometry of Ω, we generalize the spectral results of the original paper by Duclos et al. [Commun. Math. Phys. 223, 13 (2001)] to the situation when Σ does not possess poles. This enables us to consider topologically more complicated layers and state new spectral results. In particular, we are interested in layers built over surfaces with handles or several cylindrically symmetric ends. We also discuss more general regions obtained by compact deformations of certain Ω

  16. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  17. Accurate prediction of the dew points of acidic combustion gases by using an artificial neural network model

    International Nuclear Information System (INIS)

    ZareNezhad, Bahman; Aminian, Ali

    2011-01-01

    This paper presents a new approach, based on an artificial neural network (ANN) model, for predicting the acid dew points of combustion gases in process and power plants. The most important acidic combustion gases, namely SO3, SO2, NO2, HCl and HBr, are considered in this investigation. The proposed network is trained using the Levenberg-Marquardt back-propagation algorithm, and the hyperbolic tangent sigmoid activation function is applied to calculate the output values of the neurons of the hidden layer. According to the network's training, validation and testing results, a three-layer neural network with nine neurons in the hidden layer is selected as the best architecture for accurate prediction of the acidic combustion gas dew points over wide ranges of acid and moisture concentrations. The proposed neural network model can have significant application in predicting the condensation temperatures of different acid gases to mitigate corrosion problems in stacks, pollution control devices and energy recovery systems.
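
    A sketch of fitting a small tanh network with nine hidden neurons by Levenberg-Marquardt, using scipy's least-squares routine as a stand-in for the paper's training code; the inputs and dew-point targets are placeholders:

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder data: two assumed inputs (acid and moisture concentration) -> dew point (K)
X = np.random.rand(100, 2)
y = 350 + 50 * np.random.rand(100)

n_in, n_hid = 2, 9
def unpack(p):
    W1 = p[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = p[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = p[-(n_hid + 1):-1]
    b2 = p[-1]
    return W1, b1, W2, b2

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    # tanh (hyperbolic tangent sigmoid) hidden layer, linear output
    return np.tanh(X @ W1 + b1) @ W2 + b2 - y

p0 = 0.1 * np.random.randn(n_in * n_hid + 2 * n_hid + 1)
fit = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt optimisation
```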

  18. Non-invasive determination of the absorption coefficient of the brain from time-resolved reflectance using a neural network

    International Nuclear Information System (INIS)

    Jaeger, Marion; Kienle, Alwin

    2011-01-01

    We investigated the performance of a neural network for derivation of the absorption coefficient of the brain from simulated non-invasive time-resolved reflectance measurements on the head. A five-layered geometry was considered assuming that the optical properties (except the absorption coefficient of the brain) and the thickness of all layers were known with an uncertainty. A solution of the layered diffusion equation was used to train the neural network. We determined the absorption coefficient of the brain with an RMS error of <6% from reflectance data at a single distance calculated by diffusion theory. By applying the neural network to reflectance curves obtained from Monte Carlo simulations, similar errors were found. (note)

  19. Arctic Mixed Layer Dynamics

    National Research Council Canada - National Science Library

    Morison, James

    2003-01-01

    .... Over the years we have sought to understand the heat and mass balance of the mixed layer, marginal ice zone processes, the Arctic internal wave and mixing environment, summer and winter leads, and convection...

  20. Layered inorganic solids

    Czech Academy of Sciences Publication Activity Database

    Čejka, Jiří; Morris, R. E.; Nachtigall, P.; Roth, Wieslaw Jerzy

    2014-01-01

    Roč. 43, č. 27 (2014), s. 10274-10275 ISSN 1477-9226 Institutional support: RVO:61388955 Keywords : layered inorganic solids * physical chemistry * catalysis Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 4.197, year: 2014

  1. Addressing Ozone Layer Depletion

    Science.gov (United States)

    Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.

  2. Layered Fault Management Architecture

    National Research Council Canada - National Science Library

    Sztipanovits, Janos

    2004-01-01

    ... UAVs or Organic Air Vehicles. The approach of this effort was to analyze fault management requirements of formation flight for fleets of UAVs, and develop a layered fault management architecture which demonstrates significant...

  3. The Bottom Boundary Layer.

    Science.gov (United States)

    Trowbridge, John H; Lentz, Steven J

    2018-01-03

    The oceanic bottom boundary layer extracts energy and momentum from the overlying flow, mediates the fate of near-bottom substances, and generates bedforms that retard the flow and affect benthic processes. The bottom boundary layer is forced by winds, waves, tides, and buoyancy and is influenced by surface waves, internal waves, and stratification by heat, salt, and suspended sediments. This review focuses on the coastal ocean. The main points are that (a) classical turbulence concepts and modern turbulence parameterizations provide accurate representations of the structure and turbulent fluxes under conditions in which the underlying assumptions hold, (b) modern sensors and analyses enable high-quality direct or near-direct measurements of the turbulent fluxes and dissipation rates, and (c) the remaining challenges include the interaction of waves and currents with the erodible seabed, the impact of layer-scale two- and three-dimensional instabilities, and the role of the bottom boundary layer in shelf-slope exchange.

  4. The Bottom Boundary Layer

    Science.gov (United States)

    Trowbridge, John H.; Lentz, Steven J.

    2018-01-01

    The oceanic bottom boundary layer extracts energy and momentum from the overlying flow, mediates the fate of near-bottom substances, and generates bedforms that retard the flow and affect benthic processes. The bottom boundary layer is forced by winds, waves, tides, and buoyancy and is influenced by surface waves, internal waves, and stratification by heat, salt, and suspended sediments. This review focuses on the coastal ocean. The main points are that (a) classical turbulence concepts and modern turbulence parameterizations provide accurate representations of the structure and turbulent fluxes under conditions in which the underlying assumptions hold, (b) modern sensors and analyses enable high-quality direct or near-direct measurements of the turbulent fluxes and dissipation rates, and (c) the remaining challenges include the interaction of waves and currents with the erodible seabed, the impact of layer-scale two- and three-dimensional instabilities, and the role of the bottom boundary layer in shelf-slope exchange.

  5. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper.

  6. Application of RBF neural network improved by peak density function in intelligent color matching of wood dyeing

    International Nuclear Information System (INIS)

    Guan, Xuemei; Zhu, Yuren; Song, Wenlong

    2016-01-01

    According to the characteristics of wood dyeing, we propose a predictive model of pigment formula for wood dyeing based on a Radial Basis Function (RBF) neural network. In practical application, however, it is found that the number of neurons in the hidden layer of an RBF neural network is difficult to determine. In general, one needs to test several times according to experience and prior knowledge, which lacks a strict, theoretically based design procedure. It is also not known in advance whether the RBF neural network will converge. This paper proposes a peak density function to determine the number of neurons in the hidden layer. In contrast to existing approaches, the centers and the widths of the radial basis functions are initialized by extracting the features of the samples, so the uncertainty caused by the random numbers used when initializing the training parameters and the topology of the RBF neural network is eliminated. The average relative error of the original RBF neural network is 1.55% in 158 epochs. However, the average relative error of the RBF neural network improved by the peak density function is only 0.62% in 50 epochs. Therefore, the convergence rate and approximation precision of the RBF neural network are improved significantly.
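
    An illustrative sketch of initializing RBF centres from the sample distribution; the density-peak heuristic, widths and data below are simple stand-ins for the paper's peak density function, not its exact procedure:

```python
import numpy as np

def density_peaks(X, n_centers, radius=0.3):
    """Pick RBF centres at local density peaks of the samples — a simple
    stand-in for the paper's peak-density initialisation."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    density = (d < radius).sum(axis=1)
    return X[np.argsort(-density)[:n_centers]]

def rbf_design(X, centers, width):
    d = np.linalg.norm(X[:, None] - centers[None, :], axis=-1)
    return np.exp(-(d / width) ** 2)

X = np.random.rand(200, 3)   # placeholder inputs, e.g. target colour coordinates
y = np.random.rand(200)      # placeholder output, e.g. pigment proportion

C = density_peaks(X, n_centers=12)                       # hidden-layer size from the data
width = np.mean(np.linalg.norm(X[:, None] - C[None, :], axis=-1))
W, *_ = np.linalg.lstsq(rbf_design(X, C, width), y, rcond=None)
y_hat = rbf_design(X, C, width) @ W
```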

  7. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  8. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear Engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications signal processing is embedded in the process. Artificial Neural Networks (ANNs), because of their nonlinear, adaptive nature, are well suited to such applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques, which can model an underlying process from example data. They can also adapt their model parameters to statistical changes with time. Algorithms in the framework of neural networks in signal processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of neural networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigen-structure-based processing and generalization structures. (orig.)

  9. Principles of neural information processing

    CERN Document Server

    Seelen, Werner v

    2016-01-01

    In this fundamental book the authors devise a framework that describes the working of the brain as a whole. It presents a comprehensive introduction to the principles of Neural Information Processing as well as recent and authoritative research. The book's guiding principles are the main purpose of neural activity, namely to organize behavior so as to ensure survival, as well as the understanding of the evolutionary genesis of the brain. The principles and strategies developed include the self-organization of neural systems, flexibility, the active interpretation of the world by means of construction and prediction as well as their embedding into the world, all of which form the framework of the presented description. Since, in brains, their partial self-organization, the lifelong adaptation and their use of various methods of processing incoming information are all interconnected, the authors have chosen not only neurobiology and evolution theory as a basis for the elaboration of such a framework, but also syst...

  10. IMPLEMENTASI BACKPROPAGATION NEURAL NETWORK DALAM PRAKIRAAN CUACA DI DAERAH BALI SELATAN

    Directory of Open Access Journals (Sweden)

    I MADE DWI UDAYANA PUTRA

    2016-11-01

    Full Text Available Weather information plays an important role in human life in various fields, such as agriculture, marine activities, and aviation. Accurate weather forecasts are needed in order to improve performance in these fields. In this study, an artificial neural network with the backpropagation learning algorithm is used to create a weather forecasting model for the South Bali area. The aims of this study are to determine the effect of the number of neurons in the hidden layer and to determine the level of accuracy of the artificial neural network method with the backpropagation learning algorithm in weather forecast models. The weather forecast models in this study use as input the factors that influence the weather, namely air temperature, dew point, wind speed, visibility, and barometric pressure. The results of testing networks with different numbers of neurons in the hidden layer show that forecast accuracy is not directly proportional to the number of hidden neurons; increasing the number of neurons in the hidden layer does not necessarily increase or decrease the accuracy of the weather forecasts. We obtained the best accuracy rate of 51.6129% on a network model with three neurons in the hidden layer.

  11. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least...

  12. NEURAL NETWORK SYSTEM FOR DIAGNOSTICS OF AVIATION DESIGNATION PRODUCTS

    Directory of Open Access Journals (Sweden)

    В. Єременко

    2011-02-01

    Full Text Available In this article, a hybrid neural network with a Kohonen layer and a multilayer perceptron is proposed for solving the problem of classifying the technical state of the object under test. The information-measuring system can be used for standardless diagnostics, cluster analysis, and classification of products made from composite materials. The advantages of this architecture are flexibility, high performance, the ability to use different methods for collecting diagnostic information about the unit under test, and high reliability of information processing.

  13. New approach to ECG's features recognition involving neural network

    International Nuclear Information System (INIS)

    Babloyantz, A.; Ivanov, V.V.; Zrelov, P.V.

    2001-01-01

    A new approach for the detection of slight changes in the form of the ECG signal is proposed. It is based on the approximation of raw ECG data inside each RR interval by an expansion in polynomials of a special type, and on the classification of samples represented by sets of expansion coefficients using a layered feed-forward neural network. The transformation applied provides a significantly simpler data structure and stability against noise and other accidental factors. A by-product of the method is the compression of ECG data by a factor of 5.

  14. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Full Text Available Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, network-like topology. They are widely used to recognise objects in images and to diagnose patterns in time series data, as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of the convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the paper describes the proposed block structure of a CNN for classifying crucial features from 3D sensor data.
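
    A minimal Keras sketch of the typical convolution-pooling-dense layer stack discussed; the filter counts, input shape and class count are illustrative assumptions, not the paper's configuration:

```python
import tensorflow as tf

# Convolution and pooling layers extract features, fully connected layers classify.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(32, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```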

  15. Traffic sign classification with dataset augmentation and convolutional neural network

    Science.gov (United States)

    Tang, Qing; Kurnianggoro, Laksono; Jo, Kang-Hyun

    2018-04-01

    This paper presents a method for traffic sign classification using a convolutional neural network (CNN). In this method, we first convert a color image to grayscale and then normalize it in the range (-1,1) as the preprocessing step. To increase the robustness of the classification model, we apply a dataset augmentation algorithm and create new images to train the model. To avoid overfitting, we utilize a dropout module before the last fully connected layer. To assess the performance of the proposed method, the German traffic sign recognition benchmark (GTSRB) dataset is utilized. Experimental results show that the method is effective in classifying traffic signs.
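
    A sketch of the described preprocessing (grayscale conversion and normalisation to (-1, 1)) together with a simple augmentation stand-in; the paper's exact augmentation operations are not specified here, so the shift-and-jitter below is only illustrative:

```python
import numpy as np

def preprocess(rgb):
    """Grayscale conversion and normalisation to (-1, 1); luminance weights are
    the usual ITU-R BT.601 coefficients (an assumption, not stated in the paper)."""
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return gray / 127.5 - 1.0

def augment(img, rng):
    """Augmentation stand-in: random pixel shift plus brightness jitter."""
    dx, dy = rng.integers(-2, 3, size=2)
    out = np.roll(np.roll(img, dx, axis=1), dy, axis=0)
    return np.clip(out + rng.normal(0, 0.05), -1.0, 1.0)

rng = np.random.default_rng(0)
batch = rng.integers(0, 256, size=(8, 32, 32, 3)).astype(float)   # placeholder sign images
train_batch = np.stack([augment(preprocess(im), rng) for im in batch])
```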

  16. Very deep recurrent convolutional neural network for object recognition

    Science.gov (United States)

    Brahimi, Sourour; Ben Aoun, Najib; Ben Amar, Chokri

    2017-03-01

    In recent years, Computer vision has become a very active field. This field includes methods for processing, analyzing, and understanding images. The most challenging problems in computer vision are image classification and object recognition. This paper presents a new approach for object recognition task. This approach exploits the success of the Very Deep Convolutional Neural Network for object recognition. In fact, it improves the convolutional layers by adding recurrent connections. This proposed approach was evaluated on two object recognition benchmarks: Pascal VOC 2007 and CIFAR-10. The experimental results prove the efficiency of our method in comparison with the state of the art methods.

  17. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least...

  18. Neural network diagnosis of avascular necrosis from magnetic resonance images

    Science.gov (United States)

    Manduca, Armando; Christy, Paul S.; Ehman, Richard L.

    1993-09-01

    We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 × 32 size and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.

  19. Estimation of Solar Radiation using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Slamet Suprayogi

    2004-01-01

    Full Text Available Solar radiation is the most important factor affecting evapotranspiration; the mechanism of transporting vapor from the water surface also has a great effect. The main objective of this study was to investigate the potential of using an Artificial Neural Network (ANN) to predict solar radiation from temperature. A three-layer backpropagation network was developed, trained, and tested to forecast solar radiation for the Ciriung sub-catchment. Results revealed that the ANN was able to learn the events it was trained to recognize. Moreover, it was capable of effectively generalizing its training by predicting solar radiation for sets of unseen cases.

  20. Inductive differentiation of two neural lineages reconstituted in a microculture system from Xenopus early gastrula cells.

    Science.gov (United States)

    Mitani, S; Okamoto, H

    1991-05-01

    Neural induction of ectoderm cells has been reconstituted and examined in a microculture system derived from dissociated early gastrula cells of Xenopus laevis. We have used monoclonal antibodies as specific markers to monitor cellular differentiation from three distinct ectoderm lineages in culture (N1 for CNS neurons from neural tube, Me1 for melanophores from neural crest and E3 for skin epidermal cells from epidermal lineages). CNS neurons and melanophores differentiate when deep layer cells of the ventral ectoderm (VE, prospective epidermis region; 150 cells/culture) and an appropriate region of the marginal zone (MZ, prospective mesoderm region; 5-150 cells/culture) are co-cultured, but not in cultures of either cell type on their own; VE cells cultured alone yield epidermal cells as we have previously reported. The extent of inductive neural differentiation in the co-culture system strongly depends on the origin and number of MZ cells initially added to culture wells. The potency to induce CNS neurons is highest for dorsal MZ cells and sharply decreases as more ventrally located cells are used. The same dorsoventral distribution of potency is seen in the ability of MZ cells to inhibit epidermal differentiation. In contrast, the ability of MZ cells to induce melanophores shows the reverse polarity, ventral to dorsal. These data indicate that separate developmental mechanisms are used for the induction of neural tube and neural crest lineages. Co-differentiation of CNS neurons or melanophores with epidermal cells can be obtained in a single well of co-cultures of VE cells (150) and a wide range of numbers of MZ cells (5 to 100). Further, reproducible differentiation of both neural lineages requires intimate association between cells from the two gastrula regions; virtually no differentiation is obtained when cells from the VE and MZ are separated in a culture well. These results indicate that the inducing signals from MZ cells for both neural tube and neural

  1. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    Full Text Available The presented paper aims to analyze the influence of the selection of transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurement on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. When comparing the 11 nonlinear transfer functions used in the hidden-layer neurons, the RootSig function was superior to the rest of the analyzed activation functions.
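
    As a rough illustration of what comparing transfer functions involves, the sketch below builds a one-hidden-layer perceptron whose hidden activation is a swappable component; the listed functions are common generic choices and are not claimed to match the study's set (the RootSig function itself is not reproduced here).

```python
import numpy as np

# Illustrative only: a one-hidden-layer perceptron whose hidden transfer
# function is a swappable component. These activations are common generic
# choices, not the exact set compared in the study.
activations = {
    "logistic": lambda a: 1.0 / (1.0 + np.exp(-a)),
    "tanh":     np.tanh,
    "softsign": lambda a: a / (1.0 + np.abs(a)),
}

def mlp_forward(x, W1, b1, W2, b2, transfer):
    """Forward pass of a one-hidden-layer network with the chosen transfer function."""
    return activations[transfer](x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)
x = rng.normal(size=(4, 3))
for name in activations:
    print(name, mlp_forward(x, W1, b1, W2, b2, name).ravel())
```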

  2. ACOUSTIC CLASSIFICATION OF FRESHWATER FISH SPECIES USING ARTIFICIAL NEURAL NETWORK: EVALUATION OF THE MODEL PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Zulkarnaen Fahmi

    2013-06-01

    Full Text Available Hydroacoustic techniques are a valuable tool for the stock assessment of many fish species. Nonetheless, such techniques are limited by problems of species identification. Several methods and techniques have been used to address the problem of acoustic species identification, one of them being Artificial Neural Networks (ANNs). In this paper, Back Propagation (BP) and Multi-Layer Perceptron (MLP) artificial neural networks were used to classify carp (Cyprinus carpio), tilapia (Oreochromis niloticus), and catfish (Pangasius hypothalmus). Classification was done using a set of descriptors extracted from the acoustic data records, i.e. Volume Backscattering (Sv), Target Strength (TS), Area Backscattering Strength, Skewness, Kurtosis, Depth, Height and Relative Altitude. The results showed that the Multi-Layer Perceptron approach performed better than Back Propagation. The classification rate was 85.7% with the multi-layer perceptron (MLP) compared to 84.8% with the back propagation (BP) ANN.

  3. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  4. Entropy Learning in Neural Network

    Directory of Open Access Journals (Sweden)

    Geok See Ng

    2017-12-01

    Full Text Available In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes get into saturation. The early creation of such hidden nodes may impair generalisation. Hence an entropy approach is proposed to dampen the early creation of such nodes. The entropy learning also helps to increase the importance of relevant nodes while dampening the less important nodes. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
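
    One plausible reading of this scheme can be sketched as an entropy-style penalty on sigmoidal hidden activations that discourages early saturation; the formulation below is illustrative only and is not the paper's exact learning rule.

```python
import numpy as np

# Illustrative sketch (not the paper's exact rule): an entropy-style penalty on
# sigmoidal hidden activations. Saturated units (activation near 0 or 1) have
# low entropy, so subtracting the entropy term from the loss, as below, rewards
# unsaturated units and dampens early saturation.
def hidden_entropy(h, eps=1e-12):
    """Mean binary entropy of hidden activations h in (0, 1)."""
    return -np.mean(h * np.log(h + eps) + (1.0 - h) * np.log(1.0 - h + eps))

def loss_with_entropy(y_true, y_pred, h, lam=0.1):
    mse = np.mean((y_true - y_pred) ** 2)
    return mse - lam * hidden_entropy(h)       # high-entropy hidden layers are favored

h_saturated = np.array([0.999, 0.001, 0.998])
h_active    = np.array([0.45, 0.60, 0.52])
print("entropy, saturated:", hidden_entropy(h_saturated))
print("entropy, active:   ", hidden_entropy(h_active))
```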

  5. The neural cell adhesion molecule

    DEFF Research Database (Denmark)

    Berezin, V; Bock, E; Poulsen, F M

    2000-01-01

    During the past year, the understanding of the structure and function of neural cell adhesion has advanced considerably. The three-dimensional structures of several of the individual modules of the neural cell adhesion molecule (NCAM) have been determined, as well as the structure of the complex...... between two identical fragments of the NCAM. Also during the past year, a link between homophilic cell adhesion and several signal transduction pathways has been proposed, connecting the event of cell surface adhesion to cellular responses such as neurite outgrowth. Finally, the stimulation of neurite...

  6. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  7. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using a Neural Network classifier preceded by a set of preprocessing ... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantages and disadvantages of each technique. In [9], Khemiri ...

  8. Neural overlap in processing music and speech.

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L

    2015-03-19

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  9. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    the neural network attractive. A neural network is an information processing system modeled on the structure of the dynamic process. It can solve the complex/nonlinear problems quickly once trained by operating on problems using an interconnected number...

  10. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely recurrent neural network with rprop update algorithm and is applied for wave forecasting. Measured ocean waves off...

  11. Neural overlap in processing music and speech

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L.

    2015-01-01

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. PMID:25646513

  12. Construction of Neural Networks for Realization of Localized Deep Learning

    Directory of Open Access Journals (Sweden)

    Charles K. Chui

    2018-05-01

    Full Text Available The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, theoretical development of deep learning is still at its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order O(m^{-2s/(2s+d)}) of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space for shallow nets.

  13. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  14. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is addressed to investigators in visual pattern recognition, Artificial Neural Networking and related disciplines. The document also describes the MemBrain application environment as a powerful and easy to use neural networks' editor and simulator supporting ANN.

  15. Neural network to diagnose lining condition

    Science.gov (United States)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  16. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs.

    Directory of Open Access Journals (Sweden)

    Jamil Ahmad

    Full Text Available Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest like tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features termed as neural codes from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes for allowing efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.
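
    The retrieval end of such a pipeline can be sketched as follows: fuse two descriptor vectors and hash the result to short binary codes with random-hyperplane locality sensitive hashing. The CNN feature extraction and saliency detection are assumed to have happened elsewhere; all vectors below are placeholders.

```python
import numpy as np

# Rough sketch of the retrieval stage only: fuse two descriptors (whole image
# and salient region) and hash the result to short binary codes with random
# hyperplane LSH. The "neural codes" below are random placeholders.
rng = np.random.default_rng(0)

def fuse(global_codes, salient_codes):
    """Concatenate and L2-normalize the two neural-code vectors."""
    fused = np.concatenate([global_codes, salient_codes], axis=1)
    return fused / np.linalg.norm(fused, axis=1, keepdims=True)

def lsh_codes(features, n_bits=64, seed=0):
    """Random hyperplane LSH: each bit is the sign of a random projection."""
    planes = np.random.default_rng(seed).normal(size=(features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8)

db_global  = rng.normal(size=(1000, 512))    # placeholder whole-image codes
db_salient = rng.normal(size=(1000, 512))    # placeholder salient-region codes
codes = lsh_codes(fuse(db_global, db_salient))

query = lsh_codes(fuse(db_global[:1], db_salient[:1]))
hamming = (codes != query).sum(axis=1)        # retrieval by Hamming distance
print("nearest items:", np.argsort(hamming)[:5])
```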

  17. A new source difference artificial neural network for enhanced positioning accuracy

    International Nuclear Information System (INIS)

    Bhatt, Deepak; Aggarwal, Priyanka; Devabhaktuni, Vijay; Bhattacharya, Prabir

    2012-01-01

    Integrated inertial navigation system (INS) and global positioning system (GPS) units provide reliable navigation solution compared to standalone INS or GPS. Traditional Kalman filter-based INS/GPS integration schemes have several inadequacies related to sensor error model and immunity to noise. Alternatively, multi-layer perceptron (MLP) neural networks with three layers have been implemented to improve the position accuracy of the integrated system. However, MLP neural networks show poor accuracy for low-cost INS because of the large inherent sensor errors. For the first time the paper demonstrates the use of knowledge-based source difference artificial neural network (SDANN) to improve navigation performance of low-cost sensor, with or without external aiding sources. Unlike the conventional MLP or artificial neural networks (ANN), the structure of SDANN consists of two MLP neural networks called the coarse model and the difference model. The coarse model learns the input–output data relationship whereas the difference model adds knowledge to the system and fine-tunes the coarse model output by learning the associated training or estimation error. Our proposed SDANN model illustrated a significant improvement in navigation accuracy of up to 81% over conventional MLP. The results demonstrate that the proposed SDANN method is effective for GPS/INS integration schemes using low-cost inertial sensors, with and without GPS
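
    The source-difference idea can be illustrated with two stand-in regressors: a coarse model fits the input-output data, a difference model fits the coarse model's residual, and the final estimate is their sum. Plain least-squares models replace the two MLPs here and the data are synthetic, so this is only a sketch of the structure, not of SDANN itself.

```python
import numpy as np

# Illustrative sketch of the source-difference structure: a coarse model learns
# the input-output mapping, a difference model learns the coarse model's error,
# and the final estimate is their sum. Ridge regressors stand in for the MLPs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

def ridge_fit(F, t, lam=1e-3):
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ t)

# coarse model: linear in the raw inputs
F_coarse = np.hstack([X, np.ones((len(X), 1))])
w_coarse = ridge_fit(F_coarse, y)
coarse_pred = F_coarse @ w_coarse

# difference model: learns the coarse model's residual from richer features
F_diff = np.hstack([X, X ** 2, np.sin(X), np.ones((len(X), 1))])
w_diff = ridge_fit(F_diff, y - coarse_pred)

final_pred = coarse_pred + F_diff @ w_diff
print("coarse MSE:          ", np.mean((y - coarse_pred) ** 2))
print("coarse+difference MSE:", np.mean((y - final_pred) ** 2))
```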

  18. Study on pattern recognition of Raman spectrum based on fuzzy neural network

    Science.gov (United States)

    Zheng, Xiangxiang; Lv, Xiaoyi; Mo, Jiaqing

    2017-10-01

    Hydatid disease is a serious parasitic disease in many regions worldwide, especially in Xinjiang, China. The Raman spectrum of the serum of patients with echinococcosis was selected as the research object in this paper. Raman spectra of blood samples from healthy people and patients with echinococcosis were measured, and their spectral characteristics were analyzed. The fuzzy neural network not only has the ability of fuzzy logic to deal with uncertain information, but also has the knowledge-storing ability of a neural network, so it is combined with the Raman spectrum for the disease diagnosis problem based on Raman spectra. Firstly, principal component analysis (PCA) is used to extract the principal components of the Raman spectrum, reducing the network input and accelerating the prediction speed and accuracy of the network while retaining the information of the original data. Then, the extracted principal components are used as the input of the neural network, the hidden layer of the network performs rule generation and inference, and the output layer of the network produces the fuzzy classification output. Finally, a subset of samples is randomly selected to train the network, the trained network is then used to predict the remaining samples, and the predicted results are compared with a general BP neural network to illustrate the feasibility and advantages of the fuzzy neural network. Success in this endeavor would be helpful for research on the spectroscopic diagnosis of disease, and the approach can be applied in practice in many other spectral analysis fields.
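
    The PCA preprocessing step can be sketched directly with a singular value decomposition; the spectra below are synthetic stand-ins and the fuzzy neural classifier itself is not reproduced.

```python
import numpy as np

# Sketch of the preprocessing step described above: project spectra onto their
# leading principal components before feeding a classifier. The spectra are
# synthetic; the fuzzy neural network is not reproduced here.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 600))        # 120 spectra, 600 wavenumber bins

def pca_scores(X, n_components=10):
    """Return projections of X onto its first n principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    return Xc @ Vt[:n_components].T, explained[:n_components]

scores, explained = pca_scores(spectra, n_components=10)
print("network input shape:", scores.shape)   # (120, 10) instead of (120, 600)
print("variance explained by kept components:", explained.sum())
```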

  19. Modelling and Forecasting Cruise Tourism Demand to İzmir by Different Artificial Neural Network Architectures

    Directory of Open Access Journals (Sweden)

    Murat Cuhadar

    2014-03-01

    Full Text Available Abstract Cruise ports emerged as an important sector for the economy of Turkey, which is bordered on three sides by water. Forecasting cruise tourism demand ensures better planning and efficient preparation at the destination, and it is the basis for the elaboration of future plans. In recent years, new techniques such as artificial neural networks were employed for developing predictive models to estimate tourism demand. In this study, the aim is to determine the forecasting method that provides the best performance by comparing the forecast accuracy of Multi-Layer Perceptron (MLP), Radial Basis Function (RBF) and Generalized Regression Neural Network (GRNN) models in estimating the monthly inbound cruise tourism demand to İzmir. The total number of foreign cruise tourist arrivals was used as a measure of inbound cruise tourism demand, and monthly cruise tourist arrivals to İzmir Cruise Port in the period January 2005 - December 2013 were utilized to fit the models. Experimental results showed that the radial basis function (RBF) neural network outperforms the multi-layer perceptron (MLP) and the generalized regression neural network (GRNN) in terms of forecasting accuracy. By means of the obtained RBF neural network model, the monthly inbound cruise tourism demand to İzmir was forecast for the year 2014.
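
    Of the three compared architectures, the GRNN is the simplest to sketch, since it reduces to Gaussian-kernel (Nadaraya-Watson) regression over the stored training patterns. The monthly series below is synthetic and the lag structure is an assumption, so this only illustrates the mechanics, not the study's model.

```python
import numpy as np

# A generalized regression neural network (GRNN) amounts to Gaussian-kernel
# weighted averaging over training patterns. Synthetic monthly data and a
# 12-month lag window are used purely for illustration.
def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
months = np.arange(60, dtype=float)
arrivals = 100 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 60)

# lag features: predict this month's arrivals from the previous 12 months
X = np.stack([arrivals[i:i + 12] for i in range(48)])
y = arrivals[12:60]
print(grnn_predict(X[:-1], y[:-1], X[-1:], sigma=20.0))   # one-step forecast
```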

  20. Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR.

    Science.gov (United States)

    Winkler, David A; Le, Tu C

    2017-01-01

    Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSAR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNN may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  2. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  3. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  4. Recycling signals in the neural crest

    OpenAIRE

    Taneyhill, Lisa A.; Bronner-Fraser, Marianne E.

    2006-01-01

    Vertebrate neural crest cells are multipotent and differentiate into structures that include cartilage and the bones of the face, as well as much of the peripheral nervous system. Understanding how different model vertebrates utilize signaling pathways reiteratively during various stages of neural crest formation and differentiation lends insight into human disorders associated with the neural crest.

  5. Recycling signals in the neural crest.

    Science.gov (United States)

    Taneyhill, Lisa A; Bronner-Fraser, Marianne

    2005-01-01

    Vertebrate neural crest cells are multipotent and differentiate into structures that include cartilage and the bones of the face, as well as much of the peripheral nervous system. Understanding how different model vertebrates utilize signaling pathways reiteratively during various stages of neural crest formation and differentiation lends insight into human disorders associated with the neural crest.

  6. Stability of mixing layers

    Science.gov (United States)

    Tam, Christopher; Krothapalli, A

    1993-01-01

    The research program for the first year of this project (see the original research proposal) consists of developing an explicit marching scheme for solving the parabolized stability equations (PSE). Performing mathematical analysis of the computational algorithm, including numerical stability analysis and the determination of the proper boundary conditions needed at the boundary of the computation domain, is implicit in the task. Before one can solve the parabolized stability equations for high-speed mixing layers, the mean flow must first be found. In the past, instability analysis of high-speed mixing layers has mostly been performed on mean flow profiles calculated by the boundary layer equations. In carrying out this project, it is believed that the boundary layer equations might not give an accurate enough nonparallel, nonlinear mean flow needed for parabolized stability analysis. A more accurate mean flow can, however, be found by solving the parabolized Navier-Stokes equations. The advantage of the parabolized Navier-Stokes equations is that their accuracy is consistent with the PSE method. Furthermore, the method of solution is similar. Hence, the major part of the effort of this year's work has been devoted to the development of an explicit numerical marching scheme for the solution of the parabolized Navier-Stokes equations as applied to the high-speed mixing layer problem.

  7. Neural chips, neural computers and application in high and superhigh energy physics experiments

    International Nuclear Information System (INIS)

    Nikityuk, N.M.; )

    2001-01-01

    Architectural peculiarities and characteristics of a series of neural chips and neural computers used in scientific instruments are considered. Tendencies in their development and use in high-energy and superhigh-energy physics experiments are described. Comparative data are given which characterize the efficient use of neural chips for useful event selection, classification of elementary particles, reconstruction of the tracks of charged particles and the search for hypothetical Higgs particles. The characteristics of native neural chips and accelerated neural boards are considered [ru]

  8. Medical Imaging with Neural Networks

    International Nuclear Information System (INIS)

    Pattichis, C.; Cnstantinides, A.

    1994-01-01

    The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include : ultrasound, magnetic resonance, nuclear medicine and radiological (including computerized tomography). (authors)

  9. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    neural networks, such as learning, adapting and copying by means of parallel ... to provide robust recognition of hand-printed English text. Engine idle and misfiring .... and s represents the bounded activation function of a neuron. It is typically ...

  10. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  11. Intelligent neural network diagnostic system

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2010-01-01

    Recently, the artificial neural network (ANN) has made a significant mark in the domain of diagnostic applications. Neural networks are used to implement complex non-linear mappings (functions) using simple elementary units interrelated through connections with adaptive weights. The performance of an ANN mainly depends on its topology and weights. Some systems have been developed using a genetic algorithm (GA) to optimize the topology of the ANN. But they suffer from some limitations: (1) the computation time required for training the ANN several times to reach the required average weights, (2) the slowness of the GA optimization process, and (3) the fitness noise that appears in the optimization of the ANN. This research suggests new approaches to overcome these limitations in finding optimal neural network architectures to learn particular problems. The proposed methodology is used to develop a diagnostic neural network system. It has been applied to a 600 MW turbo-generator as a case of real complex systems. The proposed system has proved its significant performance compared to two common methods used in diagnostic applications.

  12. Medical Imaging with Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Pattichis, C [Department of Computer Science, University of Cyprus, Kallipoleos 75, P.O.Box 537, Nicosia (Cyprus); Cnstantinides, A [Department of Electrical Engineering, Imperial College of Science, Technology and Medicine, London SW7 2BT (United Kingdom)

    1994-12-31

    The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include : ultrasound, magnetic resonance, nuclear medicine and radiological (including computerized tomography). (authors). 61 refs, 4 tabs.

  13. Numerical experiments with neural networks

    International Nuclear Information System (INIS)

    Miranda, Enrique.

    1990-01-01

    Neural networks are highly idealized models which, in spite of their simplicity, reproduce some key features of the real brain. In this paper, they are introduced at a level adequate for an undergraduate computational physics course. Some relevant magnitudes are defined and evaluated numerically for the Hopfield model and a short term memory model. (Author)

  14. Serotonin, neural markers and memory

    Directory of Open Access Journals (Sweden)

    Alfredo eMeneses

    2015-07-01

    Full Text Available Diverse neuropsychiatric disorders present dysfunctional memory and no effective treatment exists for them, likely as a result of the absence of neural markers associated with memory. Neurotransmitter systems and signaling pathways have been implicated in memory and dysfunctional memory; however, their role is poorly understood. Hence, neural markers and cerebral functions and dysfunctions are reviewed. To our knowledge no previous systematic works have been published addressing these issues. The interactions among behavioral tasks, control groups and molecular changes and/or pharmacological effects are mentioned. Neurotransmitter receptors and signaling pathways during normally and abnormally functioning memory are reviewed, with an emphasis on the behavioral aspects of memory. The focus is on serotonin, since it is a well-characterized neurotransmitter, with multiple pharmacological tools and well-characterized downstream signaling in mammalian species. 5-HT1A, 5-HT4, 5-HT5, 5-HT6 and 5-HT7 receptors as well as SERT (the serotonin transporter) seem to be useful neural markers and/or therapeutic targets. Certainly, if the mentioned evidence is replicated, then the translatability from preclinical and clinical studies to neural changes might be confirmed. Hypotheses and theories might provide appropriate limits and perspectives of evidence.

  15. Neural correlates of viewing paintings

    DEFF Research Database (Denmark)

    Vartanian, Oshin; Skov, Martin

    2014-01-01

    Many studies involving functional magnetic resonance imaging (fMRI) have exposed participants to paintings under varying task demands. To isolate neural systems that are activated reliably across fMRI studies in response to viewing paintings regardless of variation in task demands, a quantitative...

  16. Neural Basis of Visual Distraction

    Science.gov (United States)

    Kim, So-Yeon; Hopfinger, Joseph B.

    2010-01-01

    The ability to maintain focus and avoid distraction by goal-irrelevant stimuli is critical for performing many tasks and may be a key deficit in attention-related problems. Recent studies have demonstrated that irrelevant stimuli that are consciously perceived may be filtered out on a neural level and not cause the distraction triggered by…

  17. Vestibular hearing and neural synchronization.

    Science.gov (United States)

    Emami, Seyede Faranak; Daneshi, Ahmad

    2012-01-01

    Objectives. Vestibular hearing as an auditory sensitivity of the saccule in the human ear is revealed by cervical vestibular evoked myogenic potentials (cVEMPs). The range of vestibular hearing lies in the low frequencies. Also, the amplitude of an auditory brainstem response component depends on the amount of synchronized neural activity, and the auditory nerve fibers' responses have the best synchronization with low frequencies. Thus, the aim of this study was to investigate the correlation between vestibular hearing, using cVEMPs, and neural synchronization, via slow wave Auditory Brainstem Responses (sABR). Study Design. This case-control survey consisted of twenty-two dizzy patients compared to twenty healthy controls. Methods. The intervention comprised Pure Tone Audiometry (PTA), Impedance Acoustic Metry (IA), Videonystagmography (VNG), fast wave ABR (fABR), sABR, and cVEMPs. Results. The affected ears of the dizzy patients had abnormal findings of cVEMPs (insecure vestibular hearing) and abnormal findings of sABR (decreased neural synchronization). Comparison of the cVEMPs at affected ears versus unaffected ears and the normal persons revealed significant differences (P < 0.05). Conclusion. Safe vestibular hearing was effective in the improvement of neural synchronization.

  18. Spin glasses and neural networks

    International Nuclear Information System (INIS)

    Parga, N.; Universidad Nacional de Cuyo, San Carlos de Bariloche

    1989-01-01

    The mean-field theory of spin glass models has been used as a prototype of systems with frustration and disorder. One of the most interesting related systems are models of associative memories. In these lectures we review the main concepts developed to solve the Sherrington-Kirkpatrick model and its application to neural networks. (orig.)

  19. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network in pedestrian gender classification with limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters to initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results when compared to random weights initialization and slightly more beneficial than merely initializing the first layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy to learn useful features for pedestrian gender classification.
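
    The unsupervised initialization step can be sketched as k-means clustering of normalized image patches, with the centroids used as the first-layer filters. The images, patch size and number of filters below are placeholders, and the subsequent pre-training and fine-tuning stages are not shown.

```python
import numpy as np

# Sketch of the unsupervised first-layer initialization described above:
# cluster random image patches with k-means and use the centroids as initial
# convolution filters. Images, patch size and k are placeholders.
rng = np.random.default_rng(0)
images = rng.random((100, 48, 48))           # stand-in for pedestrian crops

def sample_patches(imgs, patch=5, n=5000, rng=rng):
    H, W = imgs.shape[1:]
    out = np.empty((n, patch * patch))
    for i in range(n):
        img = imgs[rng.integers(len(imgs))]
        r, c = rng.integers(H - patch), rng.integers(W - patch)
        p = img[r:r + patch, c:c + patch].ravel()
        out[i] = (p - p.mean()) / (p.std() + 1e-8)   # per-patch normalization
    return out

def kmeans(X, k=16, iters=20, rng=rng):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

filters = kmeans(sample_patches(images)).reshape(16, 5, 5)   # initial conv kernels
print(filters.shape)
```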

  20. A Gamma Memory Neural Network for System Identification

    Science.gov (United States)

    Motter, Mark A.; Principe, Jose C.

    1992-01-01

    A gamma neural network topology is investigated for a system identification application. A discrete gamma memory structure is used in the input layer, providing delayed values of both the control inputs and the network output to the input layer. The discrete gamma memory structure implements a tapped dispersive delay line, with the amount of dispersion regulated by a single, adaptable parameter. The network is trained using static back propagation, but captures significant features of the system dynamics. The system dynamics identified with the network are the Mach number dynamics of the 16 Foot Transonic Tunnel at NASA Langley Research Center, Hampton, Virginia. The training data spans an operating range of Mach numbers from 0.4 to 1.3.
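
    A discrete gamma memory is a cascade of identical leaky integrators whose single parameter controls how much the tapped delay line disperses the input. The sketch below implements the standard gamma-memory recursion with illustrative tap count and parameter values; it does not reproduce the wind-tunnel identification model.

```python
import numpy as np

# Sketch of a discrete gamma memory: a cascade of identical leaky integrators
# whose single parameter mu controls the dispersion of the tapped delay line
# (mu = 1 recovers an ordinary tapped delay line). Tap count and mu are
# illustrative, not values from the study.
def gamma_memory(u, n_taps=5, mu=0.6):
    """Return an array of shape (len(u), n_taps + 1) holding the memory taps."""
    x = np.zeros((len(u), n_taps + 1))
    for n in range(len(u)):
        prev = x[n - 1] if n > 0 else np.zeros(n_taps + 1)
        x[n, 0] = u[n]
        for k in range(1, n_taps + 1):
            # each tap is a leaky integrator fed by the previous tap
            x[n, k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
    return x

signal = np.sin(np.linspace(0, 4 * np.pi, 50))
taps = gamma_memory(signal)          # these taps would feed the network's input layer
print(taps.shape)
```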

  1. Volumetric multimodality neural network for brain tumor segmentation

    Science.gov (United States)

    Silvana Castillo, Laura; Alexandra Daza, Laura; Carlos Rivera, Luis; Arbeláez, Pablo

    2017-11-01

    Brain lesion segmentation is one of the hardest tasks to be solved in computer vision with an emphasis on the medical field. We present a convolutional neural network that produces a semantic segmentation of brain tumors, capable of processing volumetric data along with information from multiple MRI modalities at the same time. This results in the ability to learn from small training datasets and highly imbalanced data. Our method is based on DeepMedic, the state of the art in brain lesion segmentation. We develop a new architecture with more convolutional layers, organized in three parallel pathways with different input resolution, and additional fully connected layers. We tested our method over the 2015 BraTS Challenge dataset, reaching an average dice coefficient of 84%, while the standard DeepMedic implementation reached 74%.

  2. Single image super-resolution based on convolutional neural networks

    Science.gov (United States)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network which takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than the existing methods in reconstruction quality index and human visual effect on benchmark images.

  3. Three-layer magnetoconvection

    International Nuclear Information System (INIS)

    Lin, M.-K.; Silvers, L.J.; Proctor, M.R.E.

    2008-01-01

    It is believed that some stars have two or more convection zones in close proximity near to the stellar photosphere. These zones are separated by convectively stable regions that are relatively narrow. Due to the close proximity of these regions it is important to construct mathematical models to understand the transport and mixing of passive and dynamic quantities. One key quantity of interest is a magnetic field, a dynamic vector quantity, that can drastically alter the convectively driven flows, and have an important role in coupling the different layers. In this Letter we present the first investigation into the effect of an imposed magnetic field in such a geometry. We focus our attention on the effect of field strength and show that, while there are some similarities with results for magnetic field evolution in a single layer, new and interesting phenomena are also present in a three layer system

  4. Layered tin dioxide microrods

    International Nuclear Information System (INIS)

    Duan Junhong; Huang Hongbo; Gong Jiangfeng; Zhao Xiaoning; Cheng Guangxu; Yang Shaoguang

    2007-01-01

    Single-crystalline layered SnO2 microrods were synthesized by a simple tin-water reaction at 900 deg. C. The structural and optical properties of the sample were characterized by x-ray powder diffraction, energy-dispersive x-ray spectroscopy, scanning electron microscopy, high resolution transmission electron microscopy, Raman scattering and photoluminescence (PL) spectroscopy. High resolution transmission electron microscopy studies and selected area electron diffraction patterns revealed that the layered SnO2 microrods are single crystalline and their growth direction is along [1 1 0]. The growth mechanism of the microrods was proposed based on SEM, TEM characterization and thermodynamic analysis. It is deduced that the layered microrods grow by the stacking of SnO2 sheets with a (1 1 0) surface in a vapour-liquid-solid process. Three emission peaks at 523, 569 and 626 nm were detected in room-temperature PL measurements.

  5. Superfluid Boundary Layer.

    Science.gov (United States)

    Stagg, G W; Parker, N G; Barenghi, C F

    2017-03-31

    We model the superfluid flow of liquid helium over the rough surface of a wire (used to experimentally generate turbulence) profiled by atomic force microscopy. Numerical simulations of the Gross-Pitaevskii equation reveal that the sharpest features in the surface induce vortex nucleation both intrinsically (due to the raised local fluid velocity) and extrinsically (providing pinning sites to vortex lines aligned with the flow). Vortex interactions and reconnections contribute to form a dense turbulent layer of vortices with a nonclassical average velocity profile which continually sheds small vortex rings into the bulk. We characterize this layer for various imposed flows. As boundary layers conventionally arise from viscous forces, this result opens up new insight into the nature of superflows.

  6. Non-invasive neural stimulation

    Science.gov (United States)

    Tyler, William J.; Sanguinetti, Joseph L.; Fini, Maria; Hool, Nicholas

    2017-05-01

    Neurotechnologies for non-invasively interfacing with neural circuits have been evolving from those capable of sensing neural activity to those capable of restoring and enhancing human brain function. Generally referred to as non-invasive neural stimulation (NINS) methods, these neuromodulation approaches rely on electrical, magnetic, photonic, and acoustic or ultrasonic energy to influence nervous system activity, brain function, and behavior. Evidence that has been accumulating for decades shows that advanced neural engineering of NINS technologies will indeed transform the way humans treat diseases, interact with information, communicate, and learn. The physics underlying the ability of various NINS methods to modulate nervous system activity can be quite different from one another depending on the energy modality used, as we briefly discuss. For members of commercial and defense industry sectors that have not traditionally engaged in neuroscience research and development, the science, engineering and technology required to advance NINS methods beyond the state-of-the-art presents tremendous opportunities. Within the past few years alone there have been large increases in global investments made by federal agencies, foundations, private investors and multinational corporations to develop advanced applications of NINS technologies. Driven by these efforts, NINS methods and devices have recently been introduced to mass markets via the consumer electronics industry. Further, NINS continues to be explored in a growing number of defense applications focused on enhancing human dimensions. The present paper provides a brief introduction to the field of non-invasive neural stimulation by highlighting some of the more common methods in use or under current development today.

  7. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  8. Hierarchical Neural Representation of Dreamed Objects Revealed by Brain Decoding with Deep Neural Network Features.

    Science.gov (United States)

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-01-01

    Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
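
    The identification step can be sketched as matching a decoded feature vector against category-average feature vectors by correlation and ranking the candidates. All vectors below are synthetic placeholders for the decoded fMRI features and the database-averaged DNN features.

```python
import numpy as np

# Sketch of the identification step described above: compare a decoded feature
# vector with category-average feature vectors and rank the candidate
# categories by correlation. All vectors here are synthetic placeholders.
rng = np.random.default_rng(0)
n_categories, n_features = 50, 1000
category_means = rng.normal(size=(n_categories, n_features))

true_category = 7
decoded = category_means[true_category] + 1.5 * rng.normal(size=n_features)  # noisy "decode"

def identify(decoded, category_means):
    """Return categories ranked by Pearson correlation with the decoded vector."""
    d = decoded - decoded.mean()
    m = category_means - category_means.mean(axis=1, keepdims=True)
    corr = (m @ d) / (np.linalg.norm(m, axis=1) * np.linalg.norm(d))
    return np.argsort(-corr)

print("top candidates:", identify(decoded, category_means)[:5])
```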

  9. Classification of remotely sensed data using OCR-inspired neural network techniques. [Optical Character Recognition

    Science.gov (United States)

    Kiang, Richard K.

    1992-01-01

    Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.

  10. Optical waveguides with memory effect using photochromic material for neural network

    Science.gov (United States)

    Tanimoto, Keisuke; Amemiya, Yoshiteru; Yokoyama, Shin

    2018-04-01

    An optical neural network using a waveguide with a memory effect, a photodiode, CMOS circuits and LEDs was proposed. To realize the neural network, optical waveguides with a memory effect were fabricated using a cladding layer containing the photochromic material “diarylethene”. The transmittance of green light was decreased by UV light irradiation and recovered by the passage of green light through the waveguide. It was confirmed that the transmittance versus total energy of the green light that passed through the waveguide well fit the universal exponential curve.

  11. The networks scale and coupling parameter in synchronization of neural networks with diluted synapses

    International Nuclear Information System (INIS)

    Li Yanlong; Ma Jun; Chen Yuhong; Xu Wenke; Wang Yinghai

    2008-01-01

    In this paper the influence of network scale on the coupling parameter in the synchronization of neural networks with diluted synapses is investigated. Using numerical simulations, an exponential decay form is observed in the extreme case of global coupling among networks and full connection in each network; the larger the linked degree becomes, the larger the critical coupling intensity becomes; and oscillation phenomena are found in the relationship between the critical coupling intensity and the number of neural network layers in the case of small-scale networks.

  12. Foreground removal from CMB temperature maps using an MLP neural network

    DEFF Research Database (Denmark)

    Nørgaard-Nielsen, Hans Ulrik; Jørgensen, H.E.

    2008-01-01

    the CMB temperature signal from the combined signal CMB and the foregrounds has been investigated. As a specific example, we have analysed simulated data, as expected from the ESA Planck CMB mission. A simple multilayer perceptron neural network with 2 hidden layers can provide temperature estimates over...... CMB signal it is essential to minimize the systematic errors in the CMB temperature determinations. Following the available knowledge of the spectral behavior of the Galactic foregrounds simple power law-like spectra have been assumed. The feasibility of using a simple neural network for extracting...

  13. Layered semiconductor neutron detectors

    Science.gov (United States)

    Mao, Samuel S; Perry, Dale L

    2013-12-10

    Room temperature operating solid state hand held neutron detectors integrate one or more relatively thin layers of a high neutron interaction cross-section element or materials with semiconductor detectors. The high neutron interaction cross-section element (e.g., Gd, B or Li) or materials comprising at least one high neutron interaction cross-section element can be in the form of unstructured layers or micro- or nano-structured arrays. Such architecture provides high efficiency neutron detector devices by capturing substantially more carriers produced from high energy α-particles or γ-photons generated by neutron interaction.

  14. A neural network device for on-line particle identification in cosmic ray experiments

    International Nuclear Information System (INIS)

    Scrimaglio, R.; Finetti, N.; D'Altorio, L.; Rantucci, E.; Raso, M.; Segreto, E.; Tassoni, A.; Cardarilli, G.C.

    2004-01-01

    On-line particle identification is one of the main goals of many experiments in space both for rare event studies and for optimizing measurements along the orbital trajectory. Neural networks can be a useful tool for signal processing and real time data analysis in such experiments. In this document we report on the performances of a programmable neural device which was developed in VLSI analog/digital technology. Neurons and synapses were accomplished by making use of Operational Transconductance Amplifier (OTA) structures. In this paper we report on the results of measurements performed in order to verify the agreement of the characteristic curves of each elementary cell with simulations and on the device performances obtained by implementing simple neural structures on the VLSI chip. A feed-forward neural network (Multi-Layer Perceptron, MLP) was implemented on the VLSI chip and trained to identify particles by processing the signals of two-dimensional position-sensitive Si detectors. The radiation monitoring device consisted of three double-sided silicon strip detectors. From the analysis of a set of simulated data it was found that the MLP implemented on the neural device gave results comparable with those obtained with the standard method of analysis confirming that the implemented neural network could be employed for real time particle identification

  15. Adaptive Control of Nonlinear Discrete-Time Systems by Using OS-ELM Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao-Li Li

    2014-01-01

    Full Text Available As a kind of novel feedforward neural network with a single hidden layer, ELM (extreme learning machine) neural networks are studied for the identification and control of nonlinear dynamic systems. The properties of simple structure and fast convergence of ELM can be shown clearly. In this paper, we are interested in adaptive control of nonlinear dynamic plants by using OS-ELM (online sequential extreme learning machine) neural networks. Based on data scope division, the problem that the training process of the ELM neural network is sensitive to the initial training data is also solved. According to the output range of the controlled plant, the data corresponding to this range will be used to initialize the ELM. Furthermore, due to the drawbacks of conventional adaptive control, when the OS-ELM neural network is used for adaptive control of a system with jumping parameters, the topological structure of the neural network can be adjusted dynamically by using a multiple model switching strategy, and an MMAC (multiple model adaptive control) scheme is used to improve the control performance. Simulation results are included to complement the theoretical results.
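
    A minimal sketch of the OS-ELM mechanics follows, assuming a sigmoid hidden layer with fixed random weights and a recursive least-squares update of the output weights as data chunks arrive; the plant, chunk sizes and regularization are placeholders, and the adaptive control loop and model-switching strategy from the paper are not reproduced.

```python
import numpy as np

# Sketch of OS-ELM: a single hidden layer with fixed random weights, and output
# weights updated recursively as new data chunks arrive. The plant, sizes and
# regularization below are placeholders.
rng = np.random.default_rng(0)
n_in, n_hidden = 2, 25
Win = rng.normal(size=(n_in, n_hidden)); b = rng.normal(size=n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ Win + b)))   # random-feature hidden layer

def target(X):                                     # stand-in nonlinear plant
    return np.sin(X[:, :1]) + 0.3 * X[:, 1:] ** 2

# initialization chunk
X0 = rng.uniform(-2, 2, size=(60, n_in)); H0 = hidden(X0); T0 = target(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ T0

# sequential updates, one chunk at a time
for _ in range(20):
    Xk = rng.uniform(-2, 2, size=(10, n_in)); Hk = hidden(Xk); Tk = target(Xk)
    K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - K @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

Xtest = rng.uniform(-2, 2, size=(200, n_in))
print("test MSE:", np.mean((hidden(Xtest) @ beta - target(Xtest)) ** 2))
```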

  16. Parameter diagnostics of phases and phase transition learning by neural networks

    Science.gov (United States)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
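
    The learning-by-confusing scheme can be sketched loosely as follows: propose a trial transition point, relabel the samples as below/above that point, fit a classifier, and record its accuracy over a sweep of trial points; the accuracy typically peaks near the true transition. A nearest-centroid rule on a scalar observable stands in for the neural network, and the synthetic data are not Ising or XY configurations.

```python
import numpy as np

# Loose sketch of learning by confusion on synthetic data (not Ising/XY
# configurations): sweep a trial transition temperature, relabel the samples,
# fit a nearest-centroid "classifier" on a scalar observable, and record the
# accuracy. The accuracy peaks when the trial point matches the true change.
rng = np.random.default_rng(0)
temperatures = np.linspace(0.5, 3.5, 31)
T_c = 2.27                                     # "true" transition of the toy data

samples = []
for T in temperatures:
    mean_obs = 1.0 if T < T_c else 0.2         # observable jumps at the transition
    samples.append((T, mean_obs + 0.2 * rng.normal(size=200)))

def confusion_accuracy(trial_Tc):
    X = np.concatenate([s for _, s in samples])
    labels = np.concatenate([np.full(200, T < trial_Tc) for T, _ in samples])
    mu_lo, mu_hi = X[~labels].mean(), X[labels].mean()   # centroid per proposed label
    pred = np.abs(X - mu_hi) < np.abs(X - mu_lo)
    return float((pred == labels).mean())

trials = temperatures[1:-1]                    # keep both label groups non-empty
accs = [confusion_accuracy(t) for t in trials]
print("best trial T_c:", trials[int(np.argmax(accs))])
```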

  17. Differentiation of neurons from neural precursors generated in floating spheres from embryonic stem cells

    Directory of Open Access Journals (Sweden)

    Forrester Jeff

    2009-09-01

    Full Text Available Abstract Background Neural differentiation of embryonic stem (ES) cells is usually achieved by induction of ectoderm in embryoid bodies followed by the enrichment of neuronal progenitors using a variety of factors. Obtaining reproducible percentages of neural cells is difficult and the methods are time-consuming. Results Neural progenitors were produced from murine ES cells by a combination of nonadherent conditions and serum starvation. Conversion to neural progenitors was accompanied by downregulation of Oct4 and NANOG and increased expression of nestin. ES cells containing a GFP gene under the control of the Sox1 regulatory regions became fluorescent upon differentiation to neural progenitors, and ES cells with a tau-GFP fusion protein became fluorescent upon further differentiation to neurons. Neurons produced from these cells upregulated mature neuronal markers or differentiated to glial and oligodendrocyte fates. The neurons gave rise to action potentials that could be recorded after application of fixed currents. Conclusion Neural progenitors were produced from murine ES cells by a novel method that induced neuroectoderm cells by a combination of nonadherent conditions and serum starvation, in contrast to the embryoid body method in which neuroectoderm cells must be selected after formation of all three germ layers.

  18. Spatial frequency domain spectroscopy of two layer media

    Science.gov (United States)

    Yudovsky, Dmitry; Durkin, Anthony J.

    2011-10-01

    Monitoring of tissue blood volume and oxygen saturation using biomedical optics techniques has the potential to inform the assessment of tissue health, healing, and dysfunction. These quantities are typically estimated from the contribution of oxyhemoglobin and deoxyhemoglobin to the absorption spectrum of the dermis. However, estimation of blood-related absorption in superficial tissue such as the skin can be confounded by the strong absorption of melanin in the epidermis. Furthermore, epidermal thickness and pigmentation vary with anatomic location, race, gender, and degree of disease progression. This study describes a technique for decoupling the effect of melanin absorption in the epidermis from blood absorption in the dermis for a large range of skin types and thicknesses. An artificial neural network was used to map input optical properties to spatial frequency domain diffuse reflectance of two-layer media. Then, iterative fitting was used to determine the optical properties from simulated spatial frequency domain diffuse reflectance. Additionally, an artificial neural network was trained to directly map spatial frequency domain reflectance to sets of optical properties of a two-layer medium, thus bypassing the need for iteration. In both cases, the optical thickness of the epidermis and the absorption and reduced scattering coefficients of the dermis were determined independently. The accuracy and efficiency of the iterative fitting approach were compared with the direct neural network inversion.
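
    The following is a minimal sketch of the two-step idea described above: a forward map from optical properties to spatial frequency domain reflectance, followed by an inversion that fits the properties to a measured spectrum. The analytic forward model below is a made-up stand-in for the trained neural-network surrogate, the grid search stands in for the iterative fitting, and all parameter ranges are assumptions for illustration.

    import numpy as np

    freqs = np.array([0.0, 0.05, 0.1, 0.2, 0.3])   # spatial frequencies (1/mm)

    def forward(mu_a, mu_s, tau_epi):
        # Toy diffuse reflectance: damped by epidermal optical thickness,
        # with frequency-dependent sensitivity to dermal absorption.
        return np.exp(-tau_epi) * mu_s / (mu_s + 10 * mu_a * (1 + 5 * freqs))

    # Lookup table over a coarse grid of optical properties.
    grid = [(a, s, t)
            for a in np.linspace(0.005, 0.05, 10)     # dermal absorption (1/mm)
            for s in np.linspace(0.5, 2.0, 10)        # reduced scattering (1/mm)
            for t in np.linspace(0.0, 0.5, 10)]       # epidermal optical thickness
    table = np.array([forward(*p) for p in grid])

    # "Measured" reflectance from hidden true properties, with noise.
    true = (0.02, 1.2, 0.25)
    measured = forward(*true) + 0.002 * np.random.default_rng(3).normal(size=freqs.size)

    # Inversion: least-squares fit over the table (stand-in for iterative fitting).
    best = grid[int(np.argmin(((table - measured) ** 2).sum(axis=1)))]
    print("recovered (mu_a, mu_s', tau_epi):", best, "true:", true)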

  19. Evolving RBF neural networks for adaptive soft-sensor design.

    Science.gov (United States)

    Alexandridis, Alex

    2013-12-01

    This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: on the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
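
    A minimal sketch of the two-level adaptation described above is given below: on the structure level a new Gaussian centre is added whenever an input falls outside the coverage of the existing centres (a crude stand-in for the fuzzy means algorithm), and on the parameter level the linear output weights are updated by recursive least squares with exponential forgetting. The toy process and all constants are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    width, add_radius, lam = 0.3, 0.5, 0.98   # RBF width, add threshold, forgetting

    centers = []          # list of centre vectors
    w = np.zeros((0, 1))  # output weights, one per centre
    P = np.zeros((0, 0))  # RLS covariance

    def phi(x):
        # Gaussian activations of the current hidden layer for one input.
        return np.array([[np.exp(-np.sum((x - c) ** 2) / (2 * width ** 2))
                          for c in centers]])

    # Toy process to be modelled: y = sin(3u) with slow drift.
    for k in range(1000):
        u = np.array([rng.uniform(-1, 1)])
        y = np.sin(3 * u[0]) + 0.0005 * k

        # Structure level: add a centre if the input is not covered yet.
        if not centers or min(np.linalg.norm(u - c) for c in centers) > add_radius:
            centers.append(u.copy())
            w = np.vstack([w, [[0.0]]])
            P_new = np.eye(len(centers)) * 100.0
            P_new[:P.shape[0], :P.shape[1]] = P
            P = P_new

        # Parameter level: RLS with exponential forgetting on the weights.
        h = phi(u)                                   # 1 x n_centres
        K = P @ h.T / (lam + h @ P @ h.T)
        w = w + K * (y - h @ w)
        P = (P - K @ h @ P) / lam

    print("centres:", len(centers), "final error:", abs(y - phi(u) @ w).item())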

  20. Advanced approach to numerical forecasting using artificial neural networks

    Directory of Open Access Journals (Sweden)

    Michael Štencl

    2009-01-01

    Full Text Available The current global market is driven by many factors, such as the information age and the time and amount of information distributed through many data channels, and it is practically impossible to analyze all incoming information flows and transform them into data with classical methods; new requirements can be met by using other methods. Once trained on patterns, artificial neural networks can be used for forecasting, and they are able to work with extremely large data sets in reasonable time. The patterns used for the learning process are samples of past data. This paper compares a Radial Basis Function neural network with a Multi-Layer Perceptron network trained with the back-propagation learning algorithm on a prediction task. The task works with a simplified numerical time series of forty observations and predicts the next five observations. The main topic of the article is the identification of the main differences between the neural network architectures used, together with numerical forecasting. The detected differences are then verified on a practical comparative example.
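
    A minimal sketch of this forecasting setup is given below: a simplified numerical series of forty observations is windowed into lag vectors, an RBF network is fitted by linear least squares on those windows, and the fitted model is iterated to predict the next five observations (an MLP trained with back-propagation would use the same windows). The series, lag length, and network size are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(5)
    series = np.sin(np.arange(40) * 0.3) + 0.05 * rng.normal(size=40)

    lags = 4
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]

    # RBF hidden layer: centres picked from the training inputs.
    centers = X[:: max(1, len(X) // 10)]
    width = 1.0
    def hidden(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    # Output weights by linear least squares.
    H = hidden(X)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)

    # Iterated five-step-ahead forecast from the last known window.
    window = series[-lags:].copy()
    forecast = []
    for _ in range(5):
        nxt = hidden(window[None, :]) @ w
        forecast.append(float(nxt[0]))
        window = np.append(window[1:], nxt)

    print("five-step forecast:", np.round(forecast, 3))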