WorldWideScience

Sample records for two-hidden layer neural

  1. Double hidden layer RBF process neural network based online prediction of steam turbine exhaust enthalpy

    Institute of Scientific and Technical Information of China (English)

    GONG Huanchun

    2014-01-01

In order to diagnose unit economic performance online, a radial basis function (RBF) process neural network with two hidden layers was introduced for online prediction of the steam turbine exhaust enthalpy. A model reflecting the complicated relationship between the steam turbine exhaust enthalpy and the relevant operating parameters was thus established. The enthalpy of the final-stage extraction steam and the exhaust from a 300 MW turbine unit was taken as an example for the online calculation. The results show that the average relative error of this method is less than 1%, so its accuracy is higher than that of the BP neural network. Furthermore, this method offers a high convergence rate, a simple structure and high accuracy.
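
The architecture described above can be approximated in outline: two radial-basis-function hidden layers followed by a linear readout fitted by least squares. The sketch below is a minimal stand-in, not the paper's process neural network (which handles time-varying inputs); the synthetic data, layer sizes and widths are assumptions for illustration only.

```python
# Minimal sketch (not the paper's model): a feed-forward regressor with two
# Gaussian-RBF hidden layers and a linear readout fitted by least squares.
# Synthetic data stands in for turbine operating parameters and exhaust enthalpy.
import numpy as np

rng = np.random.default_rng(0)

def rbf_layer(X, centers, width):
    """Gaussian RBF activations for every sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Synthetic "operating parameters" (5 inputs) and a smooth target.
X = rng.uniform(-1, 1, size=(400, 5))
y = np.sin(X @ rng.normal(size=5)) + 0.05 * rng.normal(size=400)
X_train, X_test = X[:300], X[300:]
y_train, y_test = y[:300], y[300:]

# Layer 1: RBF units centred on randomly chosen training samples.
c1 = X_train[rng.choice(len(X_train), 40, replace=False)]
H1_train = rbf_layer(X_train, c1, width=1.0)

# Layer 2: RBF units in the space of layer-1 activations.
c2 = H1_train[rng.choice(len(H1_train), 20, replace=False)]
H2_train = rbf_layer(H1_train, c2, width=1.0)

# Linear readout fitted by least squares (with a bias column).
A = np.hstack([H2_train, np.ones((len(H2_train), 1))])
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Evaluate on held-out data.
H2_test = rbf_layer(rbf_layer(X_test, c1, 1.0), c2, 1.0)
pred = np.hstack([H2_test, np.ones((len(H2_test), 1))]) @ w
print("test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```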

  2. Sparse coding for layered neural networks

    Science.gov (United States)

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi

    2002-07-01

We investigate storage capacity of two types of fully connected layered neural networks with sparse coding when binary patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network, in which a transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the transfer function of inter-layer is different from that of intra-layer, and inter-layered neurons and intra-layered neurons are updated alternately. We derive recursion relations for order parameters by means of the signal-to-noise ratio method, and then apply the self-control threshold method proposed by Dominguez and Bollé to both layered networks with monotonic transfer functions. We find that a critical value α_C of storage capacity is about 0.11|a ln a|^{-1} (a ≪ 1) for both layered networks, where a is a neuronal activity. It turns out that the basin of attraction is larger for both layered networks when the self-control threshold method is applied.

  3. Chaotic behavior of a layered neural network

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B.; Meir, R.

    1988-09-15

    We consider the evolution of configurations in a layered feed-forward neural network. Exact expressions for the evolution of the distance between two configurations are obtained in the thermodynamic limit. Our results show that the distance between two arbitrarily close configurations always increases, implying chaotic behavior, even in the phase of good retrieval.
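
The qualitative effect, the growth of the distance between two nearby configurations as they propagate through the layers, can be observed numerically. The sketch below uses i.i.d. Gaussian couplings rather than the stored-pattern ensemble analyzed in the paper, so it only illustrates the damage-spreading behavior.

```python
# Illustration only: damage spreading in a random layered +/-1 network.
# Couplings are i.i.d. Gaussian (not the Hebbian ensemble of the paper).
import numpy as np

rng = np.random.default_rng(1)
N, layers = 2000, 15          # units per layer, number of layers
s1 = rng.choice([-1, 1], size=N)
s2 = s1.copy()
s2[:10] *= -1                 # flip 10 bits: initial distance 0.5%

for layer in range(layers):
    J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # layer-to-layer couplings
    s1 = np.sign(J @ s1)
    s2 = np.sign(J @ s2)
    dist = np.mean(s1 != s2)  # normalized Hamming distance
    print(f"layer {layer + 1:2d}: distance = {dist:.3f}")
```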

  4. Neural crest: The fourth germ layer

    Directory of Open Access Journals (Sweden)

    K Shyamala

    2015-01-01

Full Text Available The neural crest cells (NCCs), a transient group of cells that emerges from the dorsal aspect of the neural tube during early vertebrate development, have long fascinated researchers because of their multipotency, their long-range migration through the embryo and their capacity to generate a prodigious number of differentiated cell types. For these reasons, although derived from the ectoderm, the neural crest (NC) has been called the fourth germ layer. The non-neural ectoderm, the neural plate and the underlying mesoderm are needed for the induction and formation of NC cells. Once formed, NC cells start migrating as a wave, moving away from the neuroepithelium and quickly splitting into distinct streams. These migrating NCCs home in on different regions and give rise to a plethora of tissues. A large number of signaling molecules are essential for the formation, epithelial-mesenchymal transition, delamination, migration and localization of NCCs. The authors believe that a clear understanding of the steps and signals involved in NC formation, migration, etc., may help in understanding the pathogenesis behind cancer metastasis and many other diseases. Hence, this review discusses the various aspects of NC cells.

  5. Handwritten Multiscript Pin Code Recognition System having Multiple hidden layers using Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

Stuti Asthana; Rakesh K Bhujade; Niresh Sharma; Rajdeep Singh

    2011-10-01

Full Text Available India is a country where multiple languages are spoken, depending on where people live as well as on their mother tongue. For example, a person living in Tamil Nadu is more comfortable writing and speaking Tamil than Hindi or any other language. Because of this, people sometimes write the pin code as a combination of two different numeric scripts, mainly the local language (which depends on the location or the mother tongue) and the official language (which may be the national language, Hindi, or the international language, English). In this paper we concentrate on this problem of multiscript number recognition on postcards using an artificial neural network, keeping accuracy as the chief criterion. The work has been tested on five popular Indian scripts, namely Hindi, Urdu, Tamil, English and Telugu. Experiments were performed on samples using two hidden layers of 250 neurons each, and the results revealed that with the proper combination of the number of neurons and the number of layers in the neural network, accuracy of up to 96% can be achieved under ideal conditions.
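
A two-hidden-layer back-propagation classifier of the size quoted above (250 neurons per hidden layer) can be set up directly with scikit-learn; the bundled digits dataset below stands in for the multiscript pin-code samples, which are not publicly available.

```python
# Rough sketch: a back-propagation MLP with two hidden layers of 250 neurons,
# trained here on scikit-learn's small digits dataset rather than the
# multiscript pin-code samples used in the paper.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(250, 250), activation="logistic",
                    solver="adam", max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```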

  6. Layered learning of soccer robot based on artificial neural network

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

Discusses the application of artificial neural networks to MIROSOT, introduces a layered BP-network model for a soccer robot to learn basic behavior and cooperative behavior, and concludes from experimental results that the model is effective.

  7. Learning Processes of Layered Neural Networks

    OpenAIRE

Fujiki, Sumiyoshi; Fujiki, Nahomi M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  8. Layered neural networks with non-monotonic transfer functions

    Science.gov (United States)

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi

    2003-01-01

    We investigate storage capacity and generalization ability for two types of fully connected layered neural networks with non-monotonic transfer functions; random patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network in which a non-monotonic transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the non-monotonic transfer function of inter-layer is different from that of intra-layer, and inter-layered neurons and intra-layered neurons are updated alternately. We derive recursion relations for order parameters for those layered networks by the signal-to-noise ratio method. We clarify that the storage capacity and the generalization ability for those layered networks are enhanced in comparison with those with a conventional monotonic transfer function when non-monotonicity of the transfer functions is selected optimally. We also point out that some chaotic behavior appears in the order parameters for the layered networks when non-monotonicity of the transfer functions increases.
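
For reference, one commonly studied piecewise non-monotonic transfer function is shown below next to the usual sign function; the exact functional form used in the paper may differ, so this snippet is only indicative.

```python
# One commonly studied piecewise non-monotonic transfer function (illustrative;
# not necessarily the exact form used in the paper), compared with sign(h).
import numpy as np

def non_monotonic(h, theta=1.5):
    """+1/-1 like sign(h) for small |h|, but reversed beyond the threshold."""
    return np.where(np.abs(h) < theta, np.sign(h), -np.sign(h))

h = np.linspace(-3, 3, 13)
print(np.column_stack([h, np.sign(h), non_monotonic(h)]))
```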

  9. Prediction of Double Layer Grids' Maximum Deflection Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza K. Moghadas

    2008-01-01

Full Text Available Efficient neural network models are trained to predict the maximum deflection of two-way on two-way grids with variable geometrical parameters (span and height) as well as the cross-sectional areas of the element groups. Backpropagation (BP) and Radial Basis Function (RBF) neural networks are employed for this purpose. The inputs of the neural networks are the length of the spans, L, the height, h, and the cross-sectional areas of all the groups, A, and the outputs are the maximum deflections of the corresponding double layer grids. The numerical results indicate that the RBF neural network is better than BP in terms of training time and generality of performance.

  10. The learning problem of multi-layer neural networks.

    Science.gov (United States)

    Ban, Jung-Chao; Chang, Chih-Hung

    2013-10-01

    This manuscript considers the learning problem of multi-layer neural networks (MNNs) with an activation function which comes from cellular neural networks. A systematic investigation of the partition of the parameter space is provided. Furthermore, the recursive formula of the transition matrix of an MNN is obtained. By implementing the well-developed tools in the symbolic dynamical systems, the topological entropy of an MNN can be computed explicitly. A novel phenomenon, the asymmetry of a topological diagram that was seen in Ban, Chang, Lin, and Lin (2009) [J. Differential Equations 246, pp. 552-580, 2009], is revealed.

  11. Efficient Convolutional Neural Network with Binary Quantization Layer

    OpenAIRE

    Ravanbakhsh, Mahdyar; Mousavi, Hossein; Nabi, Moin; Marcenaro, Lucio; Regazzoni, Carlo

    2016-01-01

In this paper we introduce a novel method for segmentation that can benefit from the general semantics of a Convolutional Neural Network (CNN). Our segmentation proposes visually and semantically coherent image segments. We use binary encoding of CNN features to overcome the difficulty of clustering in the high-dimensional CNN feature space. This binary encoding can be embedded into the CNN as an extra layer at the end of the network. This results in real-time segmentation. To the best of our ...

  12. Multi-Layer and Recursive Neural Networks for Metagenomic Classification.

    Science.gov (United States)

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-09-01

Recent advances in machine learning, specifically in deep learning with neural networks, have made a profound impact on fields such as natural language processing, image classification, and language modeling; however, the feasibility and potential benefits of these approaches for metagenomic data analysis have been largely under-explored. Deep learning exploits many layers of learning nonlinear feature representations, typically in an unsupervised fashion, and recent results have shown outstanding generalization performance on previously unseen data. Furthermore, some deep learning methods can also represent the structure in a data set. Consequently, deep learning and neural networks may prove to be an appropriate approach for metagenomic data. To determine whether such approaches are indeed appropriate for metagenomics, we experiment with two deep learning methods: i) a deep belief network, and ii) a recursive neural network, the latter of which provides a tree representing the structure of the data. We compare these approaches to the standard multi-layer perceptron, which has been well-established in the machine learning community as a powerful prediction algorithm, though its presence is largely missing in the metagenomics literature. We find that traditional neural networks can be quite powerful classifiers on metagenomic data compared to baseline methods, such as random forests. On the other hand, while the deep learning approaches did not result in improvements to the classification accuracy, they do provide the ability to learn hierarchical representations of a data set that standard classification methods do not allow. Our goal in this effort is not to determine the best algorithm in terms of accuracy (as that depends on the specific application) but rather to highlight the benefits and drawbacks of each of the approaches we discuss and to provide insight on how they can be improved for predictive metagenomic analysis.
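
The headline comparison, a multi-layer perceptron against a random-forest baseline, can be reproduced in outline on any feature matrix. The sketch below uses a synthetic classification problem as a stand-in for metagenomic abundance profiles; the sizes and hyperparameters are illustrative assumptions.

```python
# Outline of the MLP-versus-random-forest comparison on a synthetic feature
# matrix standing in for metagenomic abundance profiles (not the paper's data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=200, n_informative=30,
                           n_classes=3, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "multi-layer perceptron": MLPClassifier(hidden_layer_sizes=(128, 64),
                                            max_iter=800, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```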

  13. Two-Layer Feedback Neural Networks with Associative Memories

    Institute of Scientific and Technical Information of China (English)

    WU Gui-Kun; ZHAO Hong

    2008-01-01

We construct a two-layer feedback neural network by a Monte Carlo based algorithm to store memories as fixed-point attractors or as limit-cycle attractors. Special attention is focused on comparing the dynamics of the network with limit-cycle attractors and with fixed-point attractors. It is found that the former has a better retrieval property than the latter. In particular, spurious memories may be suppressed completely when the memories are stored as a long limit cycle. Potential applications of limit-cycle-attractor networks are discussed briefly.

  14. Usage of neural network to predict aluminium oxide layer thickness.

    Science.gov (United States)

    Michal, Peter; Vagaská, Alena; Gombár, Miroslav; Kmec, Ján; Spišák, Emil; Kučerka, Daniel

    2015-01-01

This paper examines the influence of the chemical composition of the electrolyte (the amounts of sulphuric acid, aluminium cations and oxalic acid in the electrolyte) and of the operating parameters of the anodic oxidation of aluminium (electrolyte temperature, anodizing time, and voltage applied during the anodizing process) on the resulting thickness of the aluminium oxide layer. The impact of these variables is shown by using a central composite design of experiment for six factors (amount of sulphuric acid, amount of oxalic acid, amount of aluminium cations, electrolyte temperature, anodizing time, and applied voltage) and a cubic neural unit trained with the Levenberg-Marquardt algorithm for evaluating the results. The paper also deals with current densities of 1 A·dm^-2 and 3 A·dm^-2 for creating the aluminium oxide layer.

  15. Antibacterial, anti-inflammatory and neuroprotective layer-by-layer coatings for neural implants

    Science.gov (United States)

    Zhang, Zhiling; Nong, Jia; Zhong, Yinghui

    2015-08-01

Objective. Infection, inflammation, and neuronal loss are common issues that seriously affect the functionality and longevity of chronically implanted neural prostheses. Minocycline hydrochloride (MH) is a broad-spectrum antibiotic and effective anti-inflammatory drug that also exhibits potent neuroprotective activities. In this study, we investigated the development of biocompatible thin film coatings capable of sustained release of MH for improving the long-term performance of implanted neural electrodes. Approach. We developed a novel magnesium binding-mediated drug delivery mechanism for controlled and sustained release of MH from an ultrathin hydrophilic layer-by-layer (LbL) coating and characterized the parameters that control MH loading and release. The anti-biofilm, anti-inflammatory and neuroprotective potencies of the LbL coating and released MH were also examined. Main results. Sustained release of a physiologically relevant amount of MH for 46 days was achieved from the Mg2+-based LbL coating at a thickness of 1.25 μm. In addition, MH release from the LbL coating is pH-sensitive. The coating and released MH demonstrated strong anti-biofilm, anti-inflammatory, and neuroprotective potencies. Significance. This study reports, for the first time, the development of a bioactive coating that can target infection, inflammation, and neuroprotection simultaneously, which may facilitate the translation of neural interfaces to clinical applications.

  16. Multi-Layered Neural Networks Infer Fundamental Stellar Parameters

    CERN Document Server

    Verma, Kuldeep; Bhattacharya, Jishnu; Antia, H M; Krishnamurthy, Ganapathy

    2016-01-01

    The advent of space-based observatories such as CoRoT and Kepler has enabled the testing of our understanding of stellar evolution on thousands of stars. Evolutionary models typically require five input parameters, the mass, initial Helium abundance, initial metallicity, mixing-length (assumed to be constant over time) and the age to which the star must be evolved. These parameters are also very useful in characterizing the associated planets and in studying galactic archaeology. How to obtain the parameters from observations rapidly and accurately, specifically in the context of surveys of thousands of stars, is an outstanding question, one that has eluded straightforward resolution. For a given star, we typically measure the effective temperature and surface metallicity spectroscopically and low-degree oscillation frequencies through space observatories. Here we demonstrate that statistical learning, using multi-layered neural networks, is successful in determining the evolutionary parameters based on spect...

  17. A one-layer recurrent neural network for support vector machine learning.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2004-04-01

    This paper presents a one-layer recurrent neural network for support vector machine (SVM) learning in pattern classification and regression. The SVM learning problem is first converted into an equivalent formulation, and then a one-layer recurrent neural network for SVM learning is proposed. The proposed neural network is guaranteed to obtain the optimal solution of support vector classification and regression. Compared with the existing two-layer neural network for the SVM classification, the proposed neural network has a low complexity for implementation. Moreover, the proposed neural network can converge exponentially to the optimal solution of SVM learning. The rate of the exponential convergence can be made arbitrarily high by simply turning up a scaling parameter. Simulation examples based on benchmark problems are discussed to show the good performance of the proposed neural network for SVM learning.

  18. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  19. Layer Winner-Take-All neural networks based on existing competitive structures.

    Science.gov (United States)

    Chen, C M; Yang, J F

    2000-01-01

In this paper, we propose generalized layer winner-take-all (WTA) neural networks based on the suggested full WTA networks, which can be extended from any existing WTA structure with a simple weighted-sum neuron. With modular regularity and local connections, the layer WTA network in either hierarchical or recursive structure is suitable for a large number of competitors. The complexity and convergence performance of layer and direct WTA neural networks are analyzed. Simulation results and theoretical analyses verify that the layer WTA neural networks with extendibility outperform their original direct WTA structures in terms of low complexity and fast convergence.
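
The elementary competition that such layer WTA networks build on can be illustrated in a few lines of NumPy: each unit is inhibited by the total activity of the others until a single winner remains. The hierarchical and recursive extensions of the paper are not reproduced, and the inhibition constant is an arbitrary choice.

```python
# Basic winner-take-all competition by mutual inhibition (numpy sketch).
# The hierarchical "layer WTA" extension of the paper is not reproduced here.
import numpy as np

def winner_take_all(x, inhibition=0.2, max_iter=200):
    """Iteratively suppress all but the largest activation."""
    a = x.astype(float).copy()
    for _ in range(max_iter):
        total = a.sum()
        a = np.maximum(0.0, a - inhibition * (total - a))  # inhibit by the others
        if np.count_nonzero(a) <= 1:
            break
    return a

scores = np.array([0.3, 0.9, 0.85, 0.1, 0.6])
result = winner_take_all(scores)
print("winner index:", int(np.argmax(result)), "final activations:", result)
```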

  20. Optimizing the Flexural Strength of Beams Reinforced with Fiber Reinforced Polymer Bars Using Back-Propagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Bahman O. Taha

    2015-06-01

Full Text Available Concrete reinforced with fiber reinforced polymer (FRP) bars (carbon, aramid, basalt and glass) is used in places where a high strength-to-weight ratio is required and corrosion is not acceptable. The behavior of structural members using FRP bars is hard to model using traditional methods because of the highly nonlinear relationship among the factors influencing the strength of structural members. A back-propagation neural network is a very effective method for modeling such complicated relationships. In this paper, a back-propagation neural network is used to model the flexural behavior of beams reinforced with FRP bars. 101 samples of beams reinforced with fiber bars were collected from the literature. Five important factors are taken into consideration for predicting the strength of the beams. Two Multilayer Perceptron (MLP) models are created, the first with a single hidden layer and the second with two hidden layers. The two-hidden-layer model showed a better accuracy ratio than the single-hidden-layer model. A parametric study has been done for the two-hidden-layer model only. Equations are derived to be used instead of the model, and the importance of the input factors is determined. The results show that the neural network is successful in modeling the behavior of concrete beams reinforced with different types of FRP bars.

  1. Supervised Learning of Logical Operations in Layered Spiking Neural Networks with Spike Train Encoding

    CERN Document Server

    Grüning, André

    2011-01-01

    Few algorithms for supervised training of spiking neural networks exist that can deal with patterns of multiple spikes, and their computational properties are largely unexplored. We demonstrate in a set of simulations that the ReSuMe learning algorithm can be successfully applied to layered neural networks. Input and output patterns are encoded as spike trains of multiple precisely timed spikes, and the network learns to transform the input trains into target output trains. This is done by combining the ReSuMe learning algorithm with multiplicative scaling of the connections of downstream neurons. We show in particular that layered networks with one hidden layer can learn the basic logical operations, including Exclusive-Or, while networks without hidden layer cannot, mirroring an analogous result for layered networks of rate neurons. While supervised learning in spiking neural networks is not yet fit for technical purposes, exploring computational properties of spiking neural networks advances our understand...

  2. NEURAL NETWORK FOR THE QUANTUM CORRECTION OF NANOSCALE SOI MOSFETS

    Institute of Scientific and Technical Information of China (English)

    Li Zunchao; Jiang Yaolin; Zhang Lili

    2006-01-01

The quantum effect of carrier distribution in nanoscale SOI MOSFETs is evident and must be taken into consideration in device modeling and simulation. In this paper, a backpropagation neural network was applied to predict the quantum density of carriers from the classical density, and the influence of the network structure on training speed and accuracy was studied. It was concluded that a carefully trained neural network with two hidden layers using the Levenberg-Marquardt learning algorithm could predict the carrier quantum density of SOI MOSFETs in very good agreement with the Schrödinger-Poisson equations.

  3. Process optimization of gravure printed light-emitting polymer layers by a neural network approach

    NARCIS (Netherlands)

    Michels, J.J.; Winter, S.H.P.M. de; Symonds, L.H.G.

    2009-01-01

    We demonstrate that artificial neural network modeling is a viable tool to predict the processing dependence of gravure printed light-emitting polymer layers for flexible OLED lighting applications. The (local) thickness of gravure printed light-emitting polymer (LEP) layers was analyzed using micro

  4. Process optimization of gravure printed light-emitting polymer layers by a neural network approach

    NARCIS (Netherlands)

    Michels, J.J.; Winter, S.H.P.M. de; Symonds, L.H.G.

    2009-01-01

    We demonstrate that artificial neural network modeling is a viable tool to predict the processing dependence of gravure printed light-emitting polymer layers for flexible OLED lighting applications. The (local) thickness of gravure printed light-emitting polymer (LEP) layers was analyzed using

  5. Adaptive control of chaotic systems based on a single layer neural network

    Energy Technology Data Exchange (ETDEWEB)

    Shen Liqun [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China)], E-mail: liqunshen@gmail.com; Wang Mao [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China)

    2007-08-27

This Letter presents an adaptive neural network control method for the chaos control problem. Based on a single-layer neural network, the dynamics about the unstable periodic point of the chaotic system can be adaptively identified without detailed information about the chaotic system, and the controlled chaotic system can be stabilized on the unstable periodic orbit. Simulation results for the Hénon map and the Lorenz system verify the effectiveness of the proposed control method.

  6. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. This leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  7. A one-layer recurrent neural network for constrained nonconvex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2015-01-01

    In this paper, a one-layer recurrent neural network is proposed for solving nonconvex optimization problems subject to general inequality constraints, designed based on an exact penalty function method. It is proved herein that any neuron state of the proposed neural network is convergent to the feasible region in finite time and stays there thereafter, provided that the penalty parameter is sufficiently large. The lower bounds of the penalty parameter and convergence time are also estimated. In addition, any neural state of the proposed neural network is convergent to its equilibrium point set which satisfies the Karush-Kuhn-Tucker conditions of the optimization problem. Moreover, the equilibrium point set is equivalent to the optimal solution to the nonconvex optimization problem if the objective function and constraints satisfy given conditions. Four numerical examples are provided to illustrate the performances of the proposed neural network.

  8. A one-layer recurrent neural network for constrained nonsmooth invex optimization.

    Science.gov (United States)

    Li, Guocheng; Yan, Zheng; Wang, Jun

    2014-02-01

    Invexity is an important notion in nonconvex optimization. In this paper, a one-layer recurrent neural network is proposed for solving constrained nonsmooth invex optimization problems, designed based on an exact penalty function method. It is proved herein that any state of the proposed neural network is globally convergent to the optimal solution set of constrained invex optimization problems, with a sufficiently large penalty parameter. In addition, any neural state is globally convergent to the unique optimal solution, provided that the objective function and constraint functions are pseudoconvex. Moreover, any neural state is globally convergent to the feasible region in finite time and stays there thereafter. The lower bounds of the penalty parameter and convergence time are also estimated. Two numerical examples are provided to illustrate the performances of the proposed neural network.

  9. A two-layer recurrent neural network for nonsmooth convex optimization problems.

    Science.gov (United States)

    Qin, Sitian; Xue, Xiaoping

    2015-06-01

    In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has a low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1 -norm minimization problems.

  10. A neural network model for credit risk evaluation.

    Science.gov (United States)

    Khashman, Adnan

    2009-08-01

Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in the automatic processing of credit applications.
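
The one- versus two-hidden-layer comparison can be sketched with scikit-learn as below; a synthetic binary classification task of the same shape (690 samples, 14 features) stands in for the Australian credit approval data, and the layer sizes are assumptions.

```python
# Sketch of the one- vs two-hidden-layer comparison. A synthetic binary
# classification task stands in for the Australian credit approval data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=690, n_features=14, n_informative=8,
                           random_state=0)  # 690 x 14, like the credit set

for hidden in [(20,), (20, 10)]:
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=hidden,
                                      max_iter=1000, random_state=0))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"hidden layers {hidden}: mean CV accuracy = {acc:.3f}")
```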

  11. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    Science.gov (United States)

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on some concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we defined quantum hidden neurons and connected quantum weights, and used them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover searching algorithm searches out the optimal parameter setting iteratively, thus making very efficient neural network learning possible. The quantum neurons and weights, along with the Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training and prospective future applications. Some simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning.

  12. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints.

    Science.gov (United States)

    Qin, Sitian; Yang, Xiudong; Xue, Xiaoping; Song, Jiahui

    2017-10-01

    Pseudoconvex optimization problem, as an important nonconvex optimization problem, plays an important role in scientific and engineering applications. In this paper, a recurrent one-layer neural network is proposed for solving the pseudoconvex optimization problem with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network is convergent to an optimal solution of the related problem. Compared with the related existing recurrent neural networks for the pseudoconvex optimization problems, the proposed neural network in this paper does not need the penalty parameters and has a better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, and we make some detailed comparisons with the known related conclusions. In the end, some numerical examples are provided to illustrate the effectiveness of the performance of the proposed neural network.

  13. Learning behavior and temporary minima of two-layer neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the occurrence of temporary minima during training of a single-output, two-layer neural network, with learning according to the back-propagation algorithm. A new vector decomposition method is introduced, which simplifies the mathematical analysis of

  14. Zero Cost Function Training Algorithms for Three-Layered Feedforward Neural Networks

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

In this paper, two theorems are proved for zero cost function (or precise I/O mapping) training algorithms for three-layered feedforward neural networks. Two training algorithms based on the Moore-Penrose pseudoinverse (MPPI) matrix, together with corresponding structure design guidelines, are also proposed.
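
The core idea, solving the output weights of a three-layered feedforward network in closed form with the Moore-Penrose pseudoinverse, can be illustrated as follows; the hidden weights are fixed at random here, which is a simplification for illustration rather than the paper's specific algorithms or design guidelines.

```python
# Minimal illustration of pseudoinverse-based training: random fixed hidden
# weights, output weights solved in closed form with the Moore-Penrose
# pseudoinverse. The paper's specific algorithms are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))                       # inputs
Y = np.stack([np.sin(X[:, 0]), X[:, 1] * X[:, 2]], axis=1)  # two targets

W_hidden = rng.normal(size=(3, 50))             # fixed random hidden layer
b_hidden = rng.normal(size=50)
H = np.tanh(X @ W_hidden + b_hidden)            # hidden activations

W_out = np.linalg.pinv(H) @ Y                   # closed-form output weights
Y_hat = H @ W_out
print("training RMSE:", np.sqrt(np.mean((Y_hat - Y) ** 2)))
```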

  15. Usage of Neural Network to Predict Aluminium Oxide Layer Thickness

    OpenAIRE

    2015-01-01

    This paper shows an influence of chemical composition of used electrolyte, such as amount of sulphuric acid in electrolyte, amount of aluminium cations in electrolyte and amount of oxalic acid in electrolyte, and operating parameters of process of anodic oxidation of aluminium such as the temperature of electrolyte, anodizing time, and voltage applied during anodizing process. The paper shows the influence of those parameters on the resulting thickness of aluminium oxide layer. The impact of...

  16. Synchronization and Inter-Layer Interactions of Noise-Driven Neural Networks.

    Science.gov (United States)

    Yuniati, Anis; Mai, Te-Lun; Chen, Chi-Ming

    2017-01-01

    In this study, we used the Hodgkin-Huxley (HH) model of neurons to investigate the phase diagram of a developing single-layer neural network and that of a network consisting of two weakly coupled neural layers. These networks are noise driven and learn through the spike-timing-dependent plasticity (STDP) or the inverse STDP rules. We described how these networks transited from a non-synchronous background activity state (BAS) to a synchronous firing state (SFS) by varying the network connectivity and the learning efficacy. In particular, we studied the interaction between a SFS layer and a BAS layer, and investigated how synchronous firing dynamics was induced in the BAS layer. We further investigated the effect of the inter-layer interaction on a BAS to SFS repair mechanism by considering three types of neuron positioning (random, grid, and lognormal distributions) and two types of inter-layer connections (random and preferential connections). Among these scenarios, we concluded that the repair mechanism has the largest effect for a network with the lognormal neuron positioning and the preferential inter-layer connections.

  17. Synchronization and Inter-Layer Interactions of Noise-Driven Neural Networks

    Science.gov (United States)

    Yuniati, Anis; Mai, Te-Lun; Chen, Chi-Ming

    2017-01-01

    In this study, we used the Hodgkin-Huxley (HH) model of neurons to investigate the phase diagram of a developing single-layer neural network and that of a network consisting of two weakly coupled neural layers. These networks are noise driven and learn through the spike-timing-dependent plasticity (STDP) or the inverse STDP rules. We described how these networks transited from a non-synchronous background activity state (BAS) to a synchronous firing state (SFS) by varying the network connectivity and the learning efficacy. In particular, we studied the interaction between a SFS layer and a BAS layer, and investigated how synchronous firing dynamics was induced in the BAS layer. We further investigated the effect of the inter-layer interaction on a BAS to SFS repair mechanism by considering three types of neuron positioning (random, grid, and lognormal distributions) and two types of inter-layer connections (random and preferential connections). Among these scenarios, we concluded that the repair mechanism has the largest effect for a network with the lognormal neuron positioning and the preferential inter-layer connections. PMID:28197088

  18. [Research on Early Identification of Bipolar Disorder Based on Multi-layer Perceptron Neural Network].

    Science.gov (United States)

    Zhang, Haowei; Gao, Yanni; Yuan, Chengmei; Liu, Ying; Ding, Yuqing

    2015-06-01

Multi-layer perceptron (MLP) neural networks belong to the class of multi-layer feedforward neural networks and have the ability and characteristics of high intelligence; they can realize complex nonlinear mappings by learning through the network. Bipolar disorder is a serious mental illness with a high recurrence rate, high self-harm rate and high suicide rate. Most onsets of bipolar disorder start with a depressive episode, which can easily be misdiagnosed as unipolar depression and lead to delayed treatment, influencing the prognosis. The early identification of bipolar disorder is therefore of great importance for patients. Because the process of early identification of bipolar disorder is nonlinear, we discuss in this paper the application of the MLP neural network to the early identification of bipolar disorder. This study covered 250 cases, including 143 cases with recurrent depression and 107 cases with bipolar disorder, and clinical features were statistically analyzed between the two groups. A total of 42 variables with significant differences were screened as the input variables of the neural network. Part of the samples were randomly selected as the learning sample, and the others as the test sample. With different neural network structures, all results of the identification of bipolar disorder were relatively good, which showed that the MLP neural network could be used in the early identification of bipolar disorder.

  19. Multi-layer holographic bifurcative neural network system for real-time adaptive EOS data analysis

    Science.gov (United States)

    Liu, Hua-Kuang; Huang, K. S.; Diep, J.

    1993-01-01

Optical data processing techniques have the inherent advantages of high data throughput, low weight and low power requirements. These features are particularly desirable for onboard spacecraft in-situ real-time data analysis and data compression applications. The proposed multi-layer optical holographic neural net pattern recognition technique will utilize nonlinear photorefractive devices for real-time adaptive learning to classify input data content and recognize unexpected features. Information can be stored either in analog or digital form in a nonlinear photorefractive device. The recording can be accomplished in time scales ranging from milliseconds to microseconds. When a system consisting of these devices is organized in a multi-layer structure, a feedforward neural net with bifurcating data classification capability is formed. The interdisciplinary research will involve collaboration with top digital computer architecture experts at the University of Southern California.

  20. 3D Polygon Mesh Compression with Multi Layer Feed Forward Neural Networks

    Directory of Open Access Journals (Sweden)

    Emmanouil Piperakis

    2003-06-01

Full Text Available In this paper, an experiment is conducted which proves that multi-layer feed-forward neural networks are capable of compressing 3D polygon meshes. Our compression method not only preserves the initial accuracy of the represented object but also enhances it. The neural network employed includes the vertex coordinates, the connectivity and the normal information in one compact form, converting the discrete surface polygon representation into an analytic, solid form. Furthermore, the 3D object in its compressed neural form can be used directly for rendering, without decompression. The neural compression-representation is amenable to 3D transformations without the need for any anti-aliasing techniques; transformations do not disrupt the accuracy of the geometry. Our method does not suffer from any scaling problem and was tested with objects of 300 to 10^7 polygons, such as the David of Michelangelo, achieving in all cases an order of O(b^3) fewer bits for the representation than any other commonly known compression method. The simplicity of our algorithm and the established mathematical background of neural networks, combined with their aptness for hardware implementation, can establish this method as a good solution for polygon compression and, if further investigated, a novel approach to 3D collision, animation and morphing.

  1. Hypothetical Pattern Recognition Design Using Multi-Layer Perceptron Neural Network For Supervised Learning

    Directory of Open Access Journals (Sweden)

    Md. Abdullah-al-mamun

    2015-08-01

Full Text Available Abstract Humans are capable of identifying diverse shapes in different patterns in the real world in an effortless fashion, because their intelligence has grown since birth through many learning processes. In the same way, we can prepare a machine, using a human-like brain called an artificial neural network, that can recognize different patterns from real-world objects. Although various techniques exist to implement pattern recognition, artificial neural network approaches have recently received significant attention, because an artificial neural network, like a human brain, learns from different observations and makes decisions based on previously learned rules. Over 50 years of research, pattern recognition for machine learning using artificial neural networks has made significant achievements, and for this reason many real-world problems can be solved by modeling the pattern recognition process. The objective of this paper is to present the theoretical concept of pattern recognition design using a Multi-Layer Perceptron neural network, in the algorithms of artificial intelligence, as the best possible way of utilizing available resources to make decisions with human-like performance.

  2. Neural network model for the efficient calculation of Green's functions in layered media

    CERN Document Server

    Soliman, E A; El-Gamal, M A; 10.1002/mmce.10066

    2003-01-01

    In this paper, neural networks are employed for fast and efficient calculation of Green's functions in a layered medium. Radial basis function networks (RBFNs) are effectively trained to estimate the coefficients and the exponents that represent a Green's function in the discrete complex image method (DCIM). Results show very good agreement with the DCIM, and the trained RBFNs are very fast compared with the corresponding DCIM. (23 refs).

  3. MIMO Channel Estimation and Equalization Using Three-Layer Neural Networks with Feedback

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

This paper describes a channel estimation and equalization algorithm using three-layer artificial neural networks (ANNs) with feedback for multiple-input multiple-output wireless communication systems. An ANN structure with feedback was designed to use different learning algorithms in the different ANN layers. This effectively forms a turbo iteration process between the different algorithms, which improves the estimation performance of the channel equalizer. Simulation results show that this channel equalization algorithm has better computational efficiency and faster convergence than algorithms based on higher-order statistics.

  4. Thickness gauging of thin layers by laser ultrasonics and neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lefevre, F; Jenot, F; Ouaftouh, M; Duquennoy, M; Ourak, M, E-mail: Fabien.lefevre@univ-valenciennes.fr [Institut d' Electronique de Microelectronique et de Nanotechnologie Departement Opto-Acousto-Electronique (UMR CNRS 8520), Universite de Valenciennes, Le Mont Houy, 59313 Valenciennes Cedex 09 (France)

    2011-01-01

Non-destructive testing has been performed on a thin indium layer deposited on a two-inch silicon wafer. Guided waves were generated and studied using a laser ultrasonic setup, and a two-dimensional Fourier transform technique was employed to obtain the dispersion curves. The inverse problem, in other words the determination of the layer thickness and the elastic constants of the substrate, has been solved by means of a feedforward neural network. These parameters were evaluated simultaneously, the dispersion curves being entirely fitted. The experimental results show good agreement with the theoretical model. This inversion method was found to be prompt and easy to automate.

  5. Solving Nonlinearly Separable Classifications in a Single-Layer Neural Network.

    Science.gov (United States)

    Conaway, Nolan; Kurtz, Kenneth J

    2017-03-01

Since the work of Minsky and Papert (1969), it has been understood that single-layer neural networks cannot solve nonlinearly separable classifications (i.e., XOR). We describe and test a novel divergent autoassociative architecture capable of solving nonlinearly separable classifications with a single layer of weights. The proposed network consists of class-specific linear autoassociators. The power of the model comes from treating classification problems as within-class feature prediction rather than directly optimizing a discriminant function. We show unprecedented learning capabilities for a simple, single-layer network (i.e., solving XOR) and demonstrate that the famous limitation in acquiring nonlinearly separable problems is not just about the need for a hidden layer; it is about the choice between directly predicting classes or learning to classify indirectly by predicting features.
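
The within-class feature-prediction idea can be checked on XOR in a few lines: fit one linear autoassociator per class by minimum-norm least squares and assign each point to the class whose autoassociator reconstructs it best. The ±1 input coding below is an assumption made for the sketch; the paper's actual architecture and learning rule may differ in detail.

```python
# Hedged sketch of classification by within-class feature prediction on XOR:
# one linear autoassociator per class, fitted by minimum-norm least squares,
# with +/-1 input coding (an assumption; the paper's exact setup may differ).
import numpy as np

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])   # XOR with +/-1 coding

# Fit one autoassociator W_c per class so that X_c @ W_c ~ X_c.
W = {c: np.linalg.pinv(X[y == c]) @ X[y == c] for c in (0, 1)}

def classify(x):
    """Assign x to the class whose autoassociator reconstructs it best."""
    errors = {c: np.linalg.norm(x @ W[c] - x) for c in W}
    return min(errors, key=errors.get)

preds = np.array([classify(x) for x in X])
print("predictions:", preds, "accuracy:", np.mean(preds == y))
```

With this coding, the least-squares solution for each class is a projection onto a one-dimensional subspace, so off-class points incur a nonzero reconstruction error and the XOR labels are recovered exactly.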

  6. Object recognition using deep convolutional neural networks with complete transfer and partial frozen layers

    Science.gov (United States)

    Kruithof, Maarten C.; Bouma, Henri; Fischer, Noëlle M.; Schutte, Klamer

    2016-10-01

    Object recognition is important to understand the content of video and allow flexible querying in a large number of cameras, especially for security applications. Recent benchmarks show that deep convolutional neural networks are excellent approaches for object recognition. This paper describes an approach of domain transfer, where features learned from a large annotated dataset are transferred to a target domain where less annotated examples are available as is typical for the security and defense domain. Many of these networks trained on natural images appear to learn features similar to Gabor filters and color blobs in the first layer. These first-layer features appear to be generic for many datasets and tasks while the last layer is specific. In this paper, we study the effect of copying all layers and fine-tuning a variable number. We performed an experiment with a Caffe-based network on 1000 ImageNet classes that are randomly divided in two equal subgroups for the transfer from one to the other. We copy all layers and vary the number of layers that is fine-tuned and the size of the target dataset. We performed additional experiments with the Keras platform on CIFAR-10 dataset to validate general applicability. We show with both platforms and both datasets that the accuracy on the target dataset improves when more target data is used. When the target dataset is large, it is beneficial to freeze only a few layers. For a large target dataset, the network without transfer learning performs better than the transfer network, especially if many layers are frozen. When the target dataset is small, it is beneficial to transfer (and freeze) many layers. For a small target dataset, the transfer network boosts generalization and it performs much better than the network without transfer learning. Learning time can be reduced by freezing many layers in a network.
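
The freeze-and-fine-tune procedure can be outlined with Keras as follows: train a small CNN on one half of the CIFAR-10 classes, copy its weights, freeze the first k layers, and fine-tune the remainder on the other half. The architecture, class split and epoch counts are illustrative, not those of the networks used in the paper.

```python
# Outline of transfer with partially frozen layers on CIFAR-10 (Keras).
# Architecture, split and epoch counts are illustrative, not the paper's setup.
import tensorflow as tf

(x, y), _ = tf.keras.datasets.cifar10.load_data()
x, y = x.astype("float32") / 255.0, y.flatten()
src, tgt = (y < 5), (y >= 5)                       # source/target class split

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])

# 1) Train on the source classes.
source = build_model()
source.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
source.fit(x[src], y[src], epochs=3, batch_size=128, verbose=0)

# 2) Copy weights, freeze the first k layers, fine-tune on the target classes.
k = 4
target = build_model()
target.set_weights(source.get_weights())
for layer in target.layers[:k]:
    layer.trainable = False                        # frozen layers keep source features
target.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
hist = target.fit(x[tgt], y[tgt] - 5, epochs=3, batch_size=128,
                  validation_split=0.2, verbose=0)
print("target validation accuracy:", round(hist.history["val_accuracy"][-1], 3))
```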

  7. Automatic detection of photoresist residual layer in lithography using a neural classification approach

    KAUST Repository

    Gereige, Issam

    2012-09-01

Photolithography is a fundamental process in the semiconductor industry and is considered a key element towards extreme nanoscale integration. In this technique, a polymer photosensitive mask with the desired patterns is created on the substrate to be etched. Roughly speaking, the areas to be etched are not covered with polymer. Thus, no residual layer should remain on these areas in order to ensure an optimal transfer of the patterns onto the substrate. In this paper, we propose a nondestructive method based on a classification approach, achieved by an artificial neural network, for automatic residual layer detection from an ellipsometric signature. Only the case of a regular defect, i.e. a homogeneous residual layer, is considered. The limitations of the method are discussed. Then, an experimental result on a 400 nm period grating manufactured with nanoimprint lithography is analyzed with our method. © 2012 Elsevier B.V. All rights reserved.

  8. A Fusion Face Recognition Approach Based on 7-Layer Deep Learning Neural Network

    Directory of Open Access Journals (Sweden)

    Jianzheng Liu

    2016-01-01

Full Text Available This paper presents a method for recognizing human faces with facial expression. In the proposed approach, a motion history image (MHI) is employed to extract the features of an expressive face. The face can be seen as a physiological characteristic of a human, and the expressions are behavioral characteristics. We fuse the 2D images of a face with MHIs generated from the same face's image sequences with expression. The fusion features are then used to feed a 7-layer deep learning neural network. The first 6 layers of the whole network can be seen as an autoencoder network which reduces the dimension of the fusion features. The last layer of the network can be seen as a softmax regression, which we use to obtain the identification decision. Experimental results demonstrate that our proposed method performs favorably against several state-of-the-art methods.

  9. A TWO-LAYER RECURRENT NEURAL NETWORK BASED APPROACH FOR OVERLAY MULTICAST

    Institute of Scientific and Technical Information of China (English)

    Liu Shidong; Zhang Shunyi; Zhou Jinquan; Qiu Gong'an

    2008-01-01

Overlay multicast has become one of the most promising multicast solutions for IP networks, and the Neural Network (NN) has been a good candidate for searching for optimal solutions to the constrained shortest routing path, by virtue of its powerful capacity for parallel computation. Though the traditional Hopfield NN can tackle the optimization problem, it is incapable of dealing with large-scale networks due to the large number of neurons. In this paper, a neural network for overlay multicast tree computation is presented to reliably implement the routing algorithm in real time. The neural network is constructed as a two-layer recurrent architecture, comprised of Independent Variable Neurons (IDVN) and Dependent Variable Neurons (DVN), according to the independence of the decision variables associated with the edges in the directed graph. Compared with heuristic routing algorithms, it is characterized by shorter computational time, fewer neurons, and better precision.

  10. Highly Accurate Multi-layer Perceptron Neural Network for Air Data System

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2009-11-01

Full Text Available The error back-propagation multi-layer perceptron algorithm is revisited. This algorithm is used to train and validate two models of three-layer neural networks that can be used to calibrate a 5-hole pressure probe. This paper addresses the Occam's Razor problem as it describes the ad hoc training methodology applied to improve accuracy and sensitivity. The trained outputs from the 5-4-3 feed-forward network architecture with a jump connection are accurate to the second decimal digit (~0.05 accuracy), hitherto unreported in the literature. Defence Science Journal, 2009, 59(6), pp. 670-674. DOI: http://dx.doi.org/10.14429/dsj.59.1574

  11. Design of Jetty Piles Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Yongjei Lee

    2014-01-01

Full Text Available To overcome the complexity of the jetty pile design process, artificial neural networks (ANN) are adopted. To generate the training samples for the ANN, finite element (FE) analysis was performed 50 times for 50 different design cases. The trained ANN was verified with another FE analysis case and then used as a structural analyzer. A multilayer neural network (MBPNN) with two hidden layers was used for the ANN. The framework of the MBPNN was defined with the lateral forces on the jetty structure and the type of piles as the inputs, and the stress ratio of the piles as the output. The results from the MBPNN agree well with those from FE analysis. Particularly for more complex models with hundreds of different design cases, the MBPNN could substitute for parametric studies with FE analysis, saving design time and cost.

  12. Respiratory signal prediction based on adaptive boosting and multi-layer perceptron neural network

    Science.gov (United States)

    Sun, W. Z.; Jiang, M. Y.; Ren, L.; Dang, J.; You, T.; Yin, F.-F.

    2017-09-01

The aim is to improve the prediction accuracy of respiratory signals using adaptive boosting and a multi-layer perceptron neural network (ADMLP-NN) for gated treatment of moving targets in radiation therapy. The respiratory signals acquired using a real-time position management (RPM) device from 138 previous 4DCT scans were retrospectively used in this study. The ADMLP-NN is composed of several artificial neural networks (ANNs) which are used as weaker predictors to compose a stronger predictor. The respiratory signal was initially smoothed using a Savitzky-Golay finite impulse response smoothing filter (S-G filter). Then, several similar multi-layer perceptron neural networks (MLP-NNs) were configured to estimate the future respiratory signal position from its previous positions. Finally, an adaptive boosting (Adaboost) decision algorithm was used to set weights for each MLP-NN based on the sample prediction error of each MLP-NN. Two prediction methods, MLP-NN and ADMLP-NN (MLP-NN plus adaptive boosting), were evaluated by calculating the correlation coefficient and root-mean-square error between the true and predicted signals. For prediction 500 ms ahead, the average correlation coefficient improved from 0.83 (MLP-NN method) to 0.89 (ADMLP-NN method). The average root-mean-square error (relative units) for prediction 500 ms ahead using the ADMLP-NN was reduced by 27.9% compared to that using the MLP-NN. The preliminary results demonstrate that the ADMLP-NN respiratory prediction method is more accurate than the MLP-NN method and can improve the respiration prediction accuracy.
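
The non-boosted half of the pipeline, Savitzky-Golay smoothing followed by a sliding-window perceptron that predicts the position 500 ms ahead, can be sketched on a synthetic breathing trace as below; the Adaboost combination of several MLPs (the ADMLP-NN itself) is omitted, so this corresponds only to the MLP-NN baseline, and the sampling rate and window length are assumptions.

```python
# Sketch of the MLP-NN baseline: Savitzky-Golay smoothing followed by a
# sliding-window MLP that predicts the respiratory position 500 ms ahead.
# A synthetic breathing trace is used; the Adaboost combination of several
# MLPs (the ADMLP-NN) is not reproduced here.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.neural_network import MLPRegressor

fs = 30                                  # samples per second (RPM-like rate)
t = np.arange(0, 120, 1.0 / fs)          # two minutes of signal
signal = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

smoothed = savgol_filter(signal, window_length=15, polyorder=3)

horizon = int(0.5 * fs)                  # predict 500 ms ahead
window = 30                              # use the last second of samples
X = np.array([smoothed[i:i + window] for i in range(len(smoothed) - window - horizon)])
y = smoothed[window + horizon:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
corr = np.corrcoef(pred, y[split:])[0, 1]
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"correlation: {corr:.3f}, RMSE: {rmse:.3f}")
```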

  13. A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming.

    Science.gov (United States)

    Liu, Q; Wang, J

    2008-04-01

    In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.
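
The general idea, a continuous-time flow whose discontinuous term enforces the constraints, can be illustrated with a generic exact-penalty gradient flow integrated by forward Euler; this is not the paper's specific network dynamics or its stability analysis, and the penalty gain and step size are arbitrary choices.

```python
# Generic illustration (not the paper's dynamics): an exact-penalty gradient
# flow for a tiny QP, integrated by forward Euler. The sign() term plays the
# role of a discontinuous hard-limiting activation enforcing the constraint.
import numpy as np

# minimize 0.5*||x||^2  subject to  x1 + x2 = 1   (solution: x = [0.5, 0.5])
Q = np.eye(2)
a, b = np.array([1.0, 1.0]), 1.0
sigma, h = 2.0, 0.01                      # penalty gain and Euler step

x = np.array([2.0, -1.0])                 # arbitrary initial state
for _ in range(5000):
    grad = Q @ x + sigma * a * np.sign(a @ x - b)
    x = x - h * grad

print("state:", np.round(x, 3), "constraint residual:", round(float(a @ x - b), 3))
```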

  14. A Three-layered Self-Organizing Map Neural Network for Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Sheng-Chai Chi

    2003-12-01

    Full Text Available In the commercial world today, holding effective information through information technology (IT) and the internet is a very important indicator of whether an enterprise has a competitive advantage in business. Clustering analysis, a technique for data mining or data analysis in databases, has been widely applied in various areas. Its purpose is to segment the individuals in the same population according to their characteristics. In this research, an enhanced three-layered self-organizing map neural network, called 3LSOM, is developed to overcome the drawback of the conventional two-layered SOM, which requires visual inspection of the map after the mapping process. To further verify its feasibility, the proposed model is applied to two common problems: the identification of four given groups of work-part images and the clustering of a machine/part incidence matrix. The experimental results show that data belonging to the same group are mapped to the same neuron on the output layer of the 3LSOM. Its clustering accuracy is good and comparable with that of the FSOM, FCM and k-Means.
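
    For reference, the conventional two-layer SOM that 3LSOM builds on can be sketched in a few lines; the grid size, learning-rate and neighbourhood schedules, and the toy Gaussian data below are arbitrary choices for the illustration, not part of the 3LSOM itself.

```python
# Minimal conventional SOM (input layer + 2-D output grid), the baseline that
# the 3LSOM described above extends.  Grid size, schedules and the toy data
# are arbitrary choices for the sketch.
import numpy as np

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(m, 0.1, size=(50, 2)) for m in (0.0, 0.5, 1.0)])

grid = 6                                         # 6x6 output layer
w = rng.uniform(0, 1, size=(grid, grid, 2))      # weight vectors of the map
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), -1)

epochs = 30
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)              # decaying learning rate
    sigma = 2.0 * (1 - epoch / epochs) + 0.5     # decaying neighbourhood radius
    for x in rng.permutation(data):
        # best-matching unit = output neuron whose weights are closest to x
        d = np.linalg.norm(w - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood pulls nearby neurons towards x as well
        h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma**2))
        w += lr * h[..., None] * (x - w)

# After training, samples of the same cluster map to nearby output neurons
print(np.unravel_index(np.argmin(np.linalg.norm(w - data[0], axis=2)), (grid, grid)))
```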

  15. Using Layer Recurrent Neural Network to Generate Pseudo Random Number Sequences

    Directory of Open Access Journals (Sweden)

    Veena Desai

    2012-03-01

    Full Text Available Pseudo Random Numbers (PRNs) are required for many cryptographic applications. This paper proposes a new method for generating PRNs using a Layer Recurrent Neural Network (LRNN). The proposed technique generates PRNs from the weight matrix obtained from the layer weights of the LRNN. The LRNN random number generator (RNG) uses a short keyword as a seed and generates a long sequence as a pseudo PRN sequence. The number of bits generated in the PRN sequence depends on the number of neurons in the input layer of the LRNN. The generated PRN sequence changes with a change in the training function of the LRNN. The sequences generated are a function of the keyword, the initial state of the network and the training function. In our implementation the PRN sequences have been generated using three training functions: (1) scaled gradient descent, (2) Levenberg-Marquardt (TRAINLM), and (3) TRAINBFG. The generated sequences are tested for randomness using the ENT and NIST test suites. The ENT test can be applied to sequences of small size. NIST has 16 tests for random numbers. The LRNN-generated PRNs pass 11 tests, show no observations for 4 tests, and fail 1 test when subjected to NIST. This paper presents the test results for random number sequences ranging from 25 bits to 1000 bits, generated using the LRNN.

  16. A design philosophy for multi-layer neural networks with applications to robot control

    Science.gov (United States)

    Vadiee, Nader; Jamshidi, MO

    1989-01-01

    A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of a pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictive behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitatory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of special-purpose neuro-computer hardware which can be used in spatio-temporal pattern recognition in such areas as air defense systems, e.g., target tracking and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.

  17. Identification of Determinants for Globalization of SMEs using Multi-Layer Perceptron Neural Networks

    Directory of Open Access Journals (Sweden)

    Umar Draz

    2016-01-01

    Full Text Available The SMEs (Small and Medium Sized Enterprises) sector is facing problems relating to the implementation of international quality standards. These SMEs need to identify factors affecting business success abroad for intelligent allocation of resources to the process of internationalization. In this paper, an MLP NN (Multi-Layer Perceptron Neural Network) has been used for identifying the relative importance of key variables related to firm basics, manufacturing, quality inspection labs and level of education in determining the exporting status of Pakistani SMEs. A survey was conducted to score the pertinent variables in SMEs, which were then coded in MLP NNs. It is found that "firm registered with OEM (Original Equipment Manufacturer)" and "size of firm" are the most important variables in determining the exporting status of SMEs, followed by the other variables. For internationalization, the results aid policy makers in formulating strategies.

  18. [Input layer self-construction neural network and its use in multivariate calibration of infrared spectra].

    Science.gov (United States)

    Gao, J B; Hu, X Y; Hu, D C

    2001-12-01

    In order to solve the problems of feature extraction and calibration modelling in the area of quantitative infrared spectra analysis, an input layer self-constructive neural network (ILSC-NN) is proposed. Before the NN training process, the training data are first analyzed and some prior knowledge about the problem is obtained. During the training process, the number of input neurons is determined adaptively based on this prior knowledge, and the network parameters are determined at the same time. This NN modelling algorithm helps to increase the efficiency of calibration modelling. A test experiment of quantitative analysis using simulated spectral data showed that this modelling method could not only achieve efficient wavelength selection, but also remarkably reduce random and non-linear noise.

  19. Applications of feedforward multilayer perceptron artificial neural networks and empirical correlation for prediction of thermal conductivity of Mg(OH)2–EG using experimental data

    DEFF Research Database (Denmark)

    Hemmat Esfe, Mohammad; Afrand, Masoud; Wongwises, Somchai

    2015-01-01

    This paper presents an investigation on the thermal conductivity of nanofluids using experimental data, neural networks, and correlation for modeling thermal conductivity. The thermal conductivity of Mg(OH)2 nanoparticles with mean diameter of 10 nm dispersed in ethylene glycol was determined by using a KD2-pro thermal analyzer. Based on the experimental data at different solid volume fractions and temperatures, an experimental correlation is proposed in terms of volume fraction and temperature. Then, the model of relative thermal conductivity as a function of volume fraction and temperature was developed via neural network based on the measured data. A network with two hidden layers and 5 neurons in each layer has the lowest error and highest fitting coefficient. By comparing the performance of the neural network model and the correlation derived from empirical data, it was revealed …

  1. Assessing artificial neural networks coupled with wavelet analysis for multi-layer soil moisture dynamics prediction

    Institute of Scientific and Technical Information of China (English)

    JunJun Yang; ZhiBin He; WeiJun Zhao; Jun Du; LongFei Chen; Xi Zhu

    2016-01-01

    Soil moisture simulation and prediction in semi-arid regions are important for agricultural production, soil conservation and climate change. However, considerable heterogeneity in the spatial distribution of soil moisture, and poor ability of distributed hydrological models to estimate it, severely impact the use of soil moisture models in research and practical applications. In this study, a newly-developed technique of coupled (WA-ANN) wavelet analysis (WA) and artificial neural network (ANN) was applied for a multi-layer soil moisture simulation in the Pailugou catchment of the Qilian Mountains, Gansu Province, China. Datasets included seven meteorological factors: air and land surface temperatures, relative humidity, global radiation, atmospheric pressure, wind speed, precipitation, and soil water content at 20, 40, 60, 80, 120 and 160 cm. To investigate the effectiveness of WA-ANN, ANN was applied by itself to conduct a comparison. Three main findings of this study were: (1) ANN and WA-ANN provided a statistically reliable and robust prediction of soil moisture in both the root zone and the deepest soil layer studied (NSE > 0.85, where NSE is the Nash-Sutcliffe Efficiency coefficient); (2) when input meteorological factors were transformed using the maximum signal to noise ratio (SNR) and a one-dimensional auto de-noising algorithm (heursure) in WA, the coupling technique improved the performance of ANN especially for soil moisture at 160 cm depth; (3) the results of multi-layer soil moisture prediction indicated that there may be different sources of water at different soil layers, and this can be used as an indicator of the maximum impact depth of meteorological factors on the soil water content at this study site. We conclude that our results show that appropriate simulation methodology can provide optimal simulation with a minimum distortion of the raw time series; the new method used here is applicable to soil sciences and management applications.
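
    A rough sketch of this kind of WA-ANN coupling is given below: each input series is wavelet-denoised and then fed to an ANN. The synthetic daily series, the 'db4' wavelet, the universal soft threshold (standing in for the heursure rule mentioned above) and the network size are all assumptions made for the illustration.

```python
# Sketch of a WA-ANN style pipeline: wavelet-denoise each meteorological input
# series, then train an ANN on the denoised inputs.  Synthetic data, 'db4'
# wavelet and a universal soft threshold stand in for the paper's heursure rule.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 730                                              # two years of daily data
t = np.arange(n)
temp = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, n)
rain = np.clip(rng.gamma(0.3, 5.0, n), 0, None)
soil_moisture = 0.2 + 0.004 * temp + 0.01 * np.convolve(rain, np.ones(10), "same") / 10

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

X = np.column_stack([wavelet_denoise(temp), wavelet_denoise(rain)])
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X[:600], soil_moisture[:600])
print("test score:", model.score(X[600:], soil_moisture[600:]))
```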

  2. An optimized recursive learning algorithm for three-layer feedforward neural networks for mimo nonlinear system identifications

    CERN Document Server

    Sha, Daohang

    2010-01-01

    Back-propagation with the gradient method is the most popular learning algorithm for feed-forward neural networks. However, it is critical to determine a proper fixed learning rate for the algorithm. In this paper, an optimized recursive algorithm is presented for online learning based on matrix operations and analytical optimization methods, which avoids the trouble of selecting a proper learning rate for the gradient method. A proof of weak convergence of the proposed algorithm is also given. Although this approach is proposed for three-layer feed-forward neural networks, it could be extended to multiple-layer feed-forward neural networks. The effectiveness of the proposed algorithm applied to the identification of the behavior of a two-input, two-output non-linear dynamic system is demonstrated by simulation experiments.

  3. The development of a knowledge base in an expert system based on the four-layer perceptron neural network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Because continuous production lines involve a large number of consecutive controls, various control signals and complex logical relations, this paper introduces the methods and principles for developing the knowledge base of a fault diagnosis expert system based on machine learning with a four-layer perceptron neural network, and presents an example. By combining differentiable functions with non-differentiable functions and back-propagation of error with back-propagation of expectation, the four-layer perceptron neural network was established. This helps to solve the bottleneck problem of knowledge acquisition in expert systems and to enhance real-time on-line diagnosis. A synthetic back-propagation method was designed, which overcomes the restriction of BP neural networks to differentiable functions.

  4. Foundations of implementing the competitive layer model by Lotka-Volterra recurrent neural networks.

    Science.gov (United States)

    Yi, Zhang

    2010-03-01

    The competitive layer model (CLM) can be described by an optimization problem. The problem can be further formulated by an energy function, called the CLM energy function, in the subspace of the nonnegative orthant. The set of minimum points of the CLM energy function forms the set of solutions of the CLM problem, and solving the CLM problem means finding such solutions. Recurrent neural networks (RNNs) can be used to implement the CLM to solve the CLM problem. The key point is to make the set of minimum points of the CLM energy function correspond exactly to the set of stable attractors of the recurrent neural networks. This paper proposes to use Lotka-Volterra RNNs (LV RNNs) to implement the CLM. The contribution of this paper is to establish the foundations of implementing the CLM by LV RNNs, and it mainly contains three parts. The first part concerns the CLM energy function: necessary and sufficient conditions for minimum points of the CLM energy function are established by detailed study. The second part concerns the convergence of the proposed LV RNN model: it is proven that the trajectories of interest are convergent. The third part is the most important: it proves that the set of stable attractors of the proposed LV RNN exactly equals the set of minimum points of the CLM energy function in the nonnegative orthant. Thus, the LV RNNs can be used to solve the CLM problem. It is believed that by establishing such basic rigorous theories, more interesting applications of the CLM can be found.

  5. A linear approach for sparse coding by a two-layer neural network

    CERN Document Server

    Montalto, Alessandro; Prevete, Roberto

    2015-01-01

    Many approaches to transform classification problems from non-linear to linear by feature transformation have been recently presented in the literature. These notably include sparse coding methods and deep neural networks. However, many of these approaches require the repeated application of a learning process upon the presentation of unseen data input vectors, or else involve the use of large numbers of parameters and hyper-parameters, which must be chosen through cross-validation, thus increasing running time dramatically. In this paper, we propose and experimentally investigate a new approach for the purpose of overcoming limitations of both kinds. The proposed approach makes use of a linear auto-associative network (called SCNN) with just one hidden layer. The combination of this architecture with a specific error function to be minimized enables one to learn a linear encoder computing a sparse code which turns out to be as similar as possible to the sparse coding that one obtains by re-training the neura...

  6. Using Multi-input-layer Wavelet Neural Network to Model Product Quality of Continuous Casting Furnace and Hot Rolling Mill

    Institute of Scientific and Technical Information of China (English)

    Huanqin Li; Jie Cheng; Baiwu Wan

    2004-01-01

    A new architecture of wavelet neural network with multiple input layers is proposed and implemented for modeling a class of large-scale industrial processes. Because the processes are very complicated, the number of technological parameters that determine the final product quality is quite large, and these parameters do not act at the same time but in different work procedures, conventional feed-forward neural networks cannot model this class of problems efficiently. The network presented in this paper has several input layers corresponding to the sequence of work procedures in large-scale industrial production processes. The performance of such networks is analyzed and the network is applied to model the steel plate quality of a continuous casting furnace and hot rolling mill. Simulation results indicate that the developed methodology is competent and has good prospects for this class of problems.

  7. Predicting the Grouting Ability of Sandy Soils by Artificial Neural Networks Based On Experimental Tests

    Directory of Open Access Journals (Sweden)

    Mahmoud Hassanlourad

    2014-12-01

    Full Text Available In this paper, the grouting ability of sandy soils is investigated by artificial neural networks based on the results of chemical grout injection tests. In order to evaluate the soil grouting potential, experimental samples were prepared and then injected. The sand samples with three different particle sizes (medium, fine, and silty) and three relative densities (30%, 50%, and 90%) were injected with sodium silicate grout at three different concentrations (water to sodium silicate ratios of 0.33, 1, and 2). A multi-layer perceptron type of artificial neural network was trained and tested using the results of 138 experimental tests. The multi-layer perceptron included one input layer, two hidden layers and one output layer. The input parameters consisted of the initial relative densities of the grouted samples, the average particle size (D50), the ratio of water to sodium silicate in the grout and the grout pressure. The output parameter was the grout injection radius. The results of the experimental tests showed that the radius of grout injection is a complicated function of the mentioned parameters. In addition, the results of the trained artificial neural network proved to be reasonably consistent with the experimental results.

  8. Neural network payload estimation for adaptive robot control.

    Science.gov (United States)

    Leahy, M R; Johnson, M A; Rogers, S K

    1991-01-01

    A concept is proposed for utilizing artificial neural networks to enhance the high-speed tracking accuracy of robotic manipulators. Tracking accuracy is a function of the controller's ability to compensate for disturbances produced by dynamical interactions between the links. A model-based control algorithm uses a nominal model of those dynamical interactions to reduce the disturbances. The problem is how to provide accurate dynamics information to the controller in the presence of payload uncertainty and modeling error. Neural network payload estimation uses a series of artificial neural networks to recognize the payload variation associated with a degradation in tracking performance. The network outputs are combined with a knowledge of nominal dynamics to produce a computationally efficient direct form of adaptive control. The concept is validated through experimentation and analysis on the first three links of a PUMA-560 manipulator. A multilayer perceptron architecture with two hidden layers is used. Integration of the principles of neural network pattern recognition and model-based control produces a tracking algorithm with enhanced robustness to incomplete dynamic information. Tracking efficacy and applicability to robust control algorithms are discussed.

  9. Dynamic boundary layer based neural network quasi-sliding mode control for soft touching down on asteroid

    Science.gov (United States)

    Liu, Xiaosong; Shan, Zebiao; Li, Yuanchun

    2017-04-01

    Pinpoint landing is a critical step in some asteroid exploration missions. This paper is concerned with the descent trajectory control for soft touchdown on a small irregularly-shaped asteroid. A dynamic boundary layer based neural network quasi-sliding mode control law is proposed to track a desired descending path. The asteroid's gravitational acceleration acting on the spacecraft is described by the polyhedron method. Considering the presence of input constraints and unmodeled acceleration, the dynamic equation of relative motion is presented first. The desired descending path is planned using the cubic polynomial method, and a collision detection algorithm is designed. To perform trajectory tracking, a neural network sliding mode control law is given first, where the sliding mode control is used to ensure the convergence of the system states. Two radial basis function neural networks (RBFNNs) are used, respectively, as an approximator for the unmodeled term and as a compensator for the difference between the actual magnitude-constrained control input and the nominal control. To reduce the chattering induced by traditional sliding mode control and guarantee the reachability of the system, a specific saturation function with a dynamic boundary layer is proposed to replace the sign function in the preceding control law. Through the Lyapunov approach, the reachability condition of the control system is given. The improved control law can guarantee that the system state moves within a gradually shrinking quasi-sliding mode band. Numerical simulation results demonstrate the effectiveness of the proposed control strategy.

  10. Identification model of multi-layered neural network parameters and its applications in the petroleum production

    Institute of Scientific and Technical Information of China (English)

    Liu Ranbing; Liu Leiming; Zhang Faqiang; Li Changhua

    2008-01-01

    This paper creates a LM (Levenberg-Marquardt) algorithm model which is appropriate for solving the problem of determining the weight values of a feedforward neural network. On the basis of this model, we provide two applications in oilfield production. Firstly, we simulated the functional relationships between the petrophysical and electrical properties of the rock with the neural network model, and studied oil saturation. Provided the precision of the data is confirmed, this method can reduce the number of experiments. Secondly, we simulated the relationships between investment and income with the neural network model, and studied the investment saturation point and the income growth rate, which is very useful for guiding investment decisions. The research results show that the model is suitable for the modeling and identification of nonlinear systems due to the strong fitting ability of neural networks and the very fast convergence speed of the LM algorithm.
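
    To illustrate the Levenberg-Marquardt idea in a few lines (not the authors' petrophysical or investment models), one can fit the weights of a tiny one-hidden-layer network with SciPy's LM-based least-squares routine on made-up data:

```python
# Sketch: fit a tiny 1-input, 3-hidden-unit tanh network with Levenberg-Marquardt
# via scipy.optimize.least_squares(method="lm").  Data and network size are
# arbitrary; this only illustrates the LM weight-fitting idea.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 60)
y = np.tanh(3 * x) + 0.05 * rng.normal(size=x.size)   # made-up target curve

H = 3  # hidden units

def forward(params, x):
    w1, b1 = params[:H], params[H:2 * H]              # input -> hidden
    w2, b2 = params[2 * H:3 * H], params[3 * H]       # hidden -> output
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def residuals(params):
    return forward(params, x) - y                     # LM minimizes the sum of squares

p0 = 0.1 * rng.normal(size=3 * H + 1)                 # initial weights
fit = least_squares(residuals, p0, method="lm")       # damped J'J updates inside
print("final cost:", fit.cost)
```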

  11. Evaluation of 1-D tracer concentration profile in a small river by means of Multi-Layer Perceptron Neural Networks

    Directory of Open Access Journals (Sweden)

    A. Piotrowski

    2007-12-01

    Full Text Available The prediction of temporal concentration profiles of a transported pollutant in a river is still a subject of ongoing research efforts worldwide. The present paper is aimed at studying the possibility of using Multi-Layer Perceptron Neural Networks to evaluate the whole concentration versus time profile at several cross-sections of a river under various flow conditions, using as little information about the river system as possible. In contrast with the earlier neural networks based work on longitudinal dispersion coefficients, this new approach relies more heavily on measurements of concentration collected during tracer tests over a range of flow conditions, but fewer hydraulic and morphological data are needed. The study is based upon 26 tracer experiments performed in a small river in Edinburgh, UK (Murray Burn) at various flow rates in a 540 m long reach. The only data used in this study were concentration measurements collected at 4 cross-sections, distances between the cross-sections and the injection site, time, as well as flow rate and water velocity, obtained according to the data measured at the 1st and 2nd cross-sections.

    The four main features of concentration versus time profiles at a particular cross-section, namely the peak concentration, the arrival time of the peak at the cross-section, and the shapes of the rising and falling limbs of the profile are modeled, and for each of them a separately designed neural network was used. There was also a variant investigated in which the conservation of the injected mass was assured by adjusting the predicted peak concentration. The neural network methods were compared with the unit peak attenuation curve concept.

    In general the neural networks predicted the main features of the concentration profiles satisfactorily. The predicted peak concentrations were generally better than those obtained using the unit peak attenuation method, and the method with mass …

  12. A Reinforcement Learning Algorithm Using Multi-Layer Artificial Neural Networks for Semi-Markov Decision Problems

    Directory of Open Access Journals (Sweden)

    Mustafa Ahmet Beyazıt Ocaktan

    2013-06-01

    Full Text Available Real life problems are generally large-scale and difficult to model. Therefore, these problems mostly cannot be solved by classical optimisation methods. This paper presents a reinforcement learning algorithm using a multi-layer artificial neural network to find an approximate solution for large-scale semi-Markov decision problems. The performance of the developed algorithm is measured and compared to a classical reinforcement learning algorithm on a small-scale numerical example. According to the results of the numerical examples, the number of hidden layers is a key success factor, and the average cost of the solution generated by the developed algorithm is approximately equal to that generated by the classical reinforcement learning algorithm.
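
    To make the semi-Markov setting concrete, the sketch below shows the SMDP form of the Q-learning update (future value discounted by gamma raised to the random sojourn time) in tabular form rather than with a multi-layer ANN approximator; the 3-state environment is invented for the example.

```python
# Toy sketch of the SMDP Q-learning update (discounting over the random sojourn
# time tau), shown in tabular form rather than with the paper's multi-layer ANN
# approximator.  The 3-state environment below is invented for the example.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Hypothetical semi-Markov environment: next state, reward, sojourn time."""
    tau = rng.exponential(1.0 + action)          # action-dependent holding time
    next_state = (state + 1 + action) % n_states
    reward = -tau + (2.0 if next_state == 0 else 0.0)
    return next_state, reward, tau

state = 0
for _ in range(50_000):
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward, tau = step(state, action)
    # SMDP update: future value is discounted by gamma**tau instead of gamma
    target = reward + gamma**tau * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])
    state = next_state

print(Q)
```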

  13. Precision requirements for single-layer feed-forward neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the effect of limited precision analog hardware for weight adaptation to be used in on-chip learning feedforward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an …

  14. Using Hybrid Algorithm to Improve Intrusion Detection in Multi Layer Feed Forward Neural Networks

    Science.gov (United States)

    Ray, Loye Lynn

    2014-01-01

    The need for detecting malicious behavior on computer networks continues to be important to maintaining a safe and secure environment. The purpose of this study was to determine the relationship of multilayer feed forward neural network architecture to the ability to detect abnormal behavior in networks. This involved building, training, and…

  15. Internal-state analysis in layered artificial neural network trained to categorize lung sounds

    NARCIS (Netherlands)

    Oud, M

    2002-01-01

    In regular use of artificial neural networks, only input and output states of the network are known to the user. Weight and bias values can be extracted but are difficult to interpret. We analyzed internal states of networks trained to map asthmatic lung sound spectra onto lung function parameters.

  17. Neural stem cell transplantation in a double-layer collagen membrane with unequal pore sizes for spinal cord injury repair

    Institute of Scientific and Technical Information of China (English)

    Ning Yuan; Wei Tian; Lei Sun; Runying Yuan; Jianfeng Tao; Dafu Chen

    2014-01-01

    A novel double-layer collagen membrane with unequal pore sizes in each layer was designed and tested in this study. The inner, loose layer has about 100-μm-diameter pores, while the outer, compact layer has about 10-μm-diameter pores. In a rat model of incomplete spinal cord injury, a large number of neural stem cells were seeded into the loose layer, which was then adhered to the injured side, and the compact layer was placed against the lateral side. The results showed that the transplantation of neural stem cells in a double-layer collagen membrane with unequal pore sizes promoted the differentiation of neural stem cells, attenuated the pathological lesion, and significantly improved the motor function of the rats with incomplete spinal cord injuries. These experimental findings suggest that the transplantation of neural stem cells in a double-layer collagen membrane with unequal pore sizes is an effective therapeutic strategy to repair an injured spinal cord.

  18. Artificial neural network modeling of jatropha oil fueled diesel engine for emission predictions

    Directory of Open Access Journals (Sweden)

    Ganapathy Thirunavukkarasu

    2009-01-01

    Full Text Available This paper deals with artificial neural network modeling of a diesel engine fueled with jatropha oil to predict the unburned hydrocarbon, smoke, and NOx emissions. Experimental data from the literature have been used as the database for the proposed neural network model development. For training the networks, the injection timing, injector opening pressure, plunger diameter, and engine load are used as the input layer, and the outputs are hydrocarbon, smoke, and NOx emissions. Feed-forward back-propagation learning algorithms with two hidden layers are used in the networks, and for each output a different network is developed with the required topology. The artificial neural network models for hydrocarbon, smoke, and NOx emissions gave R² values of 0.9976, 0.9976, and 0.9984 and mean percent errors smaller than 2.7603, 4.9524, and 3.1136, respectively, for the training data sets, while for the testing data sets the R² values were 0.9904, 0.9904, and 0.9942 and the mean percent errors were smaller than 6.5557, 6.1072, and 4.4682, respectively. The best linear fit of regression to the artificial neural network models of hydrocarbon, smoke, and NOx emissions gave correlation coefficient values of 0.98, 0.995, and 0.997, respectively.

  20. Exploiting Hidden Layer Responses of Deep Neural Networks for Language Recognition

    Science.gov (United States)

    2016-09-08

    … for Language IDentification (LID) involves the extraction of bottleneck features from a network that was trained on automatic speech recognition … better in longer segment (10, 30 and 120 second) conditions. In this work we propose a technique to improve frame-by-frame DNN LID. The logistic …

  1. Direct and inverse neural networks modelling applied to study the influence of the gas diffusion layer properties on PBI-based PEM fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J. [Chemical Engineering Department, University of Castilla-La Mancha, Campus Universitario s/n, 13004 Ciudad Real (Spain)]; Piuleac, Ciprian-George; Curteanu, Silvia [Faculty of Chemical Engineering and Environmental Protection, Department of Chemical Engineering, "Gh. Asachi" Technical University Iasi, Bd. D. Mangeron, No. 71A, 700050 Iasi (Romania)]

    2010-08-15

    This article shows the application of a very useful mathematical tool, artificial neural networks, to predict fuel cell results (the value of the tortuosity and the cell voltage at a given current density, and therefore the power) on the basis of several properties that define a Gas Diffusion Layer: Teflon content, air permeability, porosity, mean pore size, and hydrophobicity level. Four neural network types (multilayer perceptron, generalized feedforward network, modular neural network, and Jordan-Elman neural network) have been applied, with a good fit between the predicted and the experimental values of the polarization curves. A simple feedforward neural network with one hidden layer proved to be an accurate model with good generalization capability (error about 1% in the validation phase). A procedure based on inverse neural network modelling was able to determine, with small errors, the initial conditions leading to imposed values for characteristics of the fuel cell. In addition, the use of this tool has proved to be very attractive for predicting the cell performance and, more interestingly, the influence of the properties of the gas diffusion layer on the cell performance, allowing possible enhancements of this material by changing some of its properties.

  2. Crop classification by forward neural network with adaptive chaotic particle swarm optimization.

    Science.gov (United States)

    Zhang, Yudong; Wu, Lenan

    2011-01-01

    This paper proposes a hybrid crop classifier for polarimetric synthetic aperture radar (SAR) images. The feature sets consisted of the span image, the H/A/α decomposition, and gray-level co-occurrence matrix (GLCM) based texture features. The features were then reduced by principal component analysis (PCA). Finally, a two-hidden-layer forward neural network (NN) was constructed and trained by adaptive chaotic particle swarm optimization (ACPSO). K-fold cross validation was employed to enhance generalization. The experimental results on Flevoland sites demonstrate the superiority of ACPSO to back-propagation (BP), adaptive BP (ABP), momentum BP (MBP), particle swarm optimization (PSO), and resilient back-propagation (RPROP) methods. Moreover, the computation time for each pixel is only 1.08 × 10⁻⁷ s.

  3. Crop Classification by Forward Neural Network with Adaptive Chaotic Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2011-05-01

    Full Text Available This paper proposes a hybrid crop classifier for polarimetric synthetic aperture radar (SAR) images. The feature sets consisted of the span image, the H/A/α decomposition, and gray-level co-occurrence matrix (GLCM) based texture features. The features were then reduced by principal component analysis (PCA). Finally, a two-hidden-layer forward neural network (NN) was constructed and trained by adaptive chaotic particle swarm optimization (ACPSO). K-fold cross validation was employed to enhance generalization. The experimental results on Flevoland sites demonstrate the superiority of ACPSO to back-propagation (BP), adaptive BP (ABP), momentum BP (MBP), particle swarm optimization (PSO), and resilient back-propagation (RPROP) methods. Moreover, the computation time for each pixel is only 1.08 × 10⁻⁷ s.
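
    The weight-optimization idea can be sketched with plain particle swarm optimization (without the adaptive and chaotic modifications that define ACPSO); the toy two-class data, the layer sizes and the PSO constants below are arbitrary assumptions for the illustration.

```python
# Sketch of training a small two-hidden-layer classifier with plain particle
# swarm optimization (without the adaptive/chaotic modifications of ACPSO).
# Data, layer sizes and PSO constants are arbitrary choices for the example.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)            # toy non-linear labels

sizes = [2, 6, 4, 1]                                  # two hidden layers
n_w = sum((a + 1) * b for a, b in zip(sizes[:-1], sizes[1:]))

def forward(w, X):
    out, i = X, 0
    for a, b in zip(sizes[:-1], sizes[1:]):
        W = w[i:i + a * b].reshape(a, b); i += a * b
        bvec = w[i:i + b]; i += b
        out = np.tanh(out @ W + bvec)
    return (out.ravel() + 1) / 2                      # map to [0, 1]

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# Plain PSO over the flattened weight vector
n_particles, iters = 30, 300
pos = rng.normal(scale=0.5, size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

acc = np.mean((forward(gbest, X) > 0.5) == y.astype(bool))
print("training accuracy:", acc)
```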

  4. Real-Time Transportation Mode Identification Using Artificial Neural Networks Enhanced with Mode Availability Layers: A Case Study in Dubai

    Directory of Open Access Journals (Sweden)

    Young-Ji Byon

    2017-09-01

    Full Text Available Traditionally, departments of transportation (DOTs have dispatched probe vehicles with dedicated vehicles and drivers for monitoring traffic conditions. Emerging assisted GPS (AGPS and accelerometer-equipped smartphones offer new sources of raw data that arise from voluntarily-traveling smartphone users provided that their modes of transportation can correctly be identified. By introducing additional raster map layers that indicate the availability of each mode, it is possible to enhance the accuracy of mode detection results. Even in its simplest form, an artificial neural network (ANN excels at pattern recognition with a relatively short processing timeframe once it is properly trained, which is suitable for real-time mode identification purposes. Dubai is one of the major cities in the Middle East and offers unique environments, such as a high density of extremely high-rise buildings that may introduce multi-path errors with GPS signals. This paper develops real-time mode identification ANNs enhanced with proposed mode availability geographic information system (GIS layers, firstly for a universal mode detection and, secondly for an auto mode detection for the particular intelligent transportation system (ITS application of traffic monitoring, and compares the results with existing approaches. It is found that ANN-based real-time mode identification, enhanced by mode availability GIS layers, significantly outperforms the existing methods.

  5. UNIVERSAL APPROXIMATION WITH NON-SIGMOID HIDDEN LAYER ACTIVATION FUNCTIONS BY USING ARTIFICIAL NEURAL NETWORK MODELING

    Directory of Open Access Journals (Sweden)

    R. Murugadoss

    2014-10-01

    Full Text Available Neural networks are modeled on the way the human brain works. They are capable of learning: with suitable training and design they can automatically recognize complex relationships and hidden dependencies in historical example patterns and use this information for forecasting. The main difference, and at the same time the biggest advantage, of neural network models over statistical techniques is that the forecaster does not need to specify the exact functional structure between input and output variables; instead, this structure is "learned" by the system with certain learning algorithms using a kind of threshold logic. The goal of the learning procedure is to determine, during the training phase, those parameters of the network with which the network exhibits behavior adequate for the problem. Mathematically, the training phase is an iterative process converging towards a minimum error value: the processing units of the network are adjusted so as to minimize the "total error". Currently the most popular and most widely used algorithm for business applications is the backpropagation algorithm. This paper opens the black box of backpropagation networks and makes the optimization process in the network comprehensible, both over time and locally.
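
    To make the training-as-error-minimization point concrete, here is a bare-bones single-hidden-layer network with a non-sigmoid (softplus) hidden activation trained by plain backpropagation on a toy 1-D function; the network size, learning rate and data are assumptions for the sketch, not taken from the paper.

```python
# Bare-bones backpropagation with a non-sigmoid (softplus) hidden activation,
# approximating a toy 1-D function.  Network size, learning rate and data are
# arbitrary; this only makes the training-as-error-minimization idea concrete.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-2, 2, 100).reshape(-1, 1)
y = np.sin(2 * x)                                    # target function

H, lr = 20, 0.05
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def softplus(z):                                     # non-sigmoid activation
    return np.log1p(np.exp(z))

def softplus_grad(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    z1 = x @ W1 + b1                                 # forward pass
    h = softplus(z1)
    pred = h @ W2 + b2
    err = pred - y                                   # "total error" being minimized
    # backward pass (gradients of the mean squared error)
    dW2 = h.T @ err / len(x);  db2 = err.mean(axis=0)
    dh = err @ W2.T * softplus_grad(z1)
    dW1 = x.T @ dh / len(x);   db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

pred = softplus(x @ W1 + b1) @ W2 + b2
print("final MSE:", float(np.mean((pred - y) ** 2)))
```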

  6. Artificial neural networks for simulating wind effects on sprinkler distribution patterns

    Energy Technology Data Exchange (ETDEWEB)

    Sayyadi, H.; Sadraddini, A. A.; Farsadi Zadeh, D.; Montero, J.

    2012-07-01

    A new approach based on Artificial Neural Networks (ANNs) is presented to simulate the effects of wind on the distribution pattern of a single sprinkler under a center pivot or block irrigation system. Field experiments were performed under various wind conditions (speed and direction). The experimental data from different distribution patterns, obtained using a Nelson R3000 Rotator sprinkler, were split into three subsets used for model training, validation and testing. Parameters affecting the distribution pattern were defined. To find an optimal structure, various networks with different architectures were trained using an early stopping method. The selected structure produced R² = 0.929 and RMSE = 6.69 mL for the test subset, consisting of a Multi-Layer Perceptron (MLP) neural network with a backpropagation training algorithm, two hidden layers (twenty neurons in the first hidden layer and six neurons in the second hidden layer) and a tangent-sigmoid transfer function. This optimal network was implemented in MATLAB to develop a model termed ISSP (Intelligent Simulator of Sprinkler Pattern). ISSP uses wind speed and direction as input variables and is able to simulate the distorted distribution pattern of an R3000 Rotator sprinkler with reasonable accuracy (R² > 0.935). The results of the model evaluation confirm the accuracy and robustness of ANNs for the simulation of a single sprinkler distribution pattern under real field conditions.

  7. Predicting Subsurface Soil Layering and Landslide Risk with Artificial Neural Networks

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Barari, Amin; Ibsen, Lars Bo

    2011-01-01

    This paper is concerned principally with the application of the ANN model in geotechnical engineering. In particular, the application to subsurface soil layering and landslide analysis is discussed in more detail. Three ANN models are trained using the required geotechnical data obtained from the investigation of the study area. The quality of the modeling is further improved by the application of some controlling techniques involved in ANN. Based on the obtained results, and considering that the test data were not presented to the network in the training process, it can be stated that the trained neural networks are capable of predicting variations in the soil profile and assessing the landslide hazard with an acceptable level of confidence.

  8. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network

    Directory of Open Access Journals (Sweden)

    M. Fatih Adak

    2016-02-01

    Full Text Available Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different aroma data sets (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using the artificial bee colony algorithm (ABC), which is known to be successful in exploration. Test data were given to two different neural networks, each of which was trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. The training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  9. Prediction of gas hydrate saturation throughout the seismic section in Krishna Godavari basin using multivariate linear regression and multi-layer feed forward neural network approach

    Digital Repository Service at National Institute of Oceanography (India)

    Singh, Y.; Nair, R.R.; Singh, H.; Datta, P.; Jaiswal, P.; Dewangan, P.; Ramprasad, T.

    … Krishna-Godavari basin. The log prediction process, with uncertainties based on root mean square error properties, was implemented by way of a multi-layer feed forward neural network. The log properties were merged with seismic data by applying a non-linear transform …

  10. Optical implementation of a single-layer finite impulse response neural network

    Science.gov (United States)

    Silveira, Paulo E. X.; Pati, G. S.; Wagner, Kelvin H.

    2000-05-01

    This paper demonstrates a space integrating optical implementation of a single-layer FIRNN. A scrolling spatial light modulator is used for representing the spatio-temporal input plane, while the weights are implemented by the adaptive grating formation in a photorefractive crystal. Differential heterodyning is used for low-noise bipolar output detection and an active stabilization technique using a lock-in amplifier and a piezo-electric actuator is adopted for long term interferometric stability. Simulations and initial experimental results for adaptive sonar broadband beamforming are presented.

  11. Storage capacity and learning algorithms for two-layer neural networks

    Science.gov (United States)

    Engel, A.; Köhler, H. M.; Tschepke, F.; Vollmayr, H.; Zippelius, A.

    1992-05-01

    A two-layer feedforward network of McCulloch-Pitts neurons with N inputs and K hidden units is analyzed for N → ∞ and K finite with respect to its ability to implement p = αN random input-output relations. Special emphasis is put on the case where all hidden units are coupled to the output with the same strength (committee machine) and the receptive fields of the hidden units either enclose all input units (fully connected) or are nonoverlapping (tree structure). The storage capacity is determined generalizing Gardner's treatment [J. Phys. A 21, 257 (1988); Europhys. Lett. 4, 481 (1987)] of the single-layer perceptron. For the treelike architecture, a replica-symmetric calculation yields αc ∼ √K for a large number K of hidden units. This result violates an upper bound derived by Mitchison and Durbin [Biol. Cybern. 60, 345 (1989)]. One-step replica-symmetry breaking gives lower values of αc. In the fully connected committee machine there are in general correlations among different hidden units. As the limit of capacity is approached, the hidden units are anticorrelated: one hidden unit attempts to learn those patterns which have not been learned by the others. These correlations decrease as 1/K, so that for K → ∞ the capacity per synapse is the same as for the tree architecture, whereas for small K we find a considerable enhancement of the storage per synapse. Numerical simulations were performed to explicitly construct solutions for the tree as well as the fully connected architecture. A learning algorithm is suggested. It is based on the least-action algorithm, which is modified to take advantage of the two-layer structure. The numerical simulations yield capacities p that are slightly more than twice the number of degrees of freedom, while the fully connected net can store relatively more patterns than the tree. Various generalizations are discussed. Variable weights from hidden to output give the same results for the storage capacity as does the committee machine.
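
    For concreteness, the tree committee machine analysed above can be written down in a few lines; the values of K, the receptive-field size and the random patterns below are arbitrary, and the snippet only illustrates the architecture (hidden sign units with non-overlapping inputs feeding a majority vote), not the capacity calculation or the least-action learning rule.

```python
# Forward pass of a "tree" committee machine: K hidden sign units with
# non-overlapping receptive fields and fixed +1 couplings to the output.
# N, K and the random patterns are arbitrary example values.
import numpy as np

rng = np.random.default_rng(8)
K, n_per_unit = 5, 20                 # K hidden units, non-overlapping inputs
N = K * n_per_unit                    # total number of input units

w = rng.normal(size=(K, n_per_unit))  # one weight vector per hidden unit

def committee_output(w, x):
    fields = x.reshape(K, n_per_unit)                  # split input among units
    hidden = np.sign(np.sum(w * fields, axis=1))       # K internal +/-1 decisions
    return np.sign(np.sum(hidden))                     # majority vote (K odd)

# p = alpha * N random input/output pairs
alpha = 1.0
p = int(alpha * N)
patterns = np.sign(rng.normal(size=(p, N)))
targets = np.sign(rng.normal(size=p))

agree = np.mean([committee_output(w, x) == t for x, t in zip(patterns, targets)])
print(f"random couplings reproduce {agree:.0%} of {p} random mappings")
```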

  12. Entropy-Based Application Layer DDoS Attack Detection Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Khundrakpam Johnson Singh

    2016-10-01

    Full Text Available Distributed denial-of-service (DDoS attack is one of the major threats to the web server. The rapid increase of DDoS attacks on the Internet has clearly pointed out the limitations in current intrusion detection systems or intrusion prevention systems (IDS/IPS, mostly caused by application-layer DDoS attacks. Within this context, the objective of the paper is to detect a DDoS attack using a multilayer perceptron (MLP classification algorithm with genetic algorithm (GA as learning algorithm. In this work, we analyzed the standard EPA-HTTP (environmental protection agency-hypertext transfer protocol dataset and selected the parameters that will be used as input to the classifier model for differentiating the attack from normal profile. The parameters selected are the HTTP GET request count, entropy, and variance for every connection. The proposed model can provide a better accuracy of 98.31%, sensitivity of 0.9962, and specificity of 0.0561 when compared to other traditional classification models.

  13. Daily global solar radiation modelling using multi-layer perceptron neural networks in semi-arid region

    Directory of Open Access Journals (Sweden)

    Mawloud GUERMOUI

    2016-07-01

    Full Text Available Accurate estimation of Daily Global Solar Radiation (DGSR) has been a major goal for solar energy applications. However, solar radiation measurements are not a simple task for several reasons. In cases where data are not available, it is very common to use computational models to estimate the missing data, based mainly on the search for relationships between weather variables, such as temperature, humidity, sunshine duration, etc. In this respect, the present study focuses on the development of an artificial neural network (ANN) model for the estimation of daily global solar radiation on a horizontal surface in Ghardaia city (South Algeria). In this analysis the back-propagation algorithm is applied. Daily mean air temperature, relative humidity and sunshine duration were used as climatic input parameters, while the daily global solar radiation (DGSR) was the only output of the ANN. We evaluated Multi-Layer Perceptron (MLP) models to estimate DGSR using three years of measurements (2005-2008). It was found that the MLP model based on sunshine duration and mean air temperature gives accurate results in terms of Mean Absolute Bias Error, Root Mean Square Error, Relative Square Error and Correlation Coefficient. The obtained values of these indicators are 0.67 MJ/m², 1.28 MJ/m², 6.12% and 98.18%, respectively, which shows that the MLP is well suited for DGSR estimation in semi-arid climates.

  14. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    Directory of Open Access Journals (Sweden)

    Trong-Ngoc Le

    2016-01-01

    Full Text Available Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions, which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, a postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained a mean volumetric overlap error of 27.43% and a mean percentage volume error of 15.73%. The means of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.

  15. [Multi-layer perceptron neural network based algorithm for simultaneous retrieving temperature and emissivity from hyperspectral FTIR data].

    Science.gov (United States)

    Cheng, Jie; Xiao, Qing; Li, Xiao-Wen; Liu, Qin-Huo; Du, Yong-Ming

    2008-04-01

    The present paper firstly points out a defect of typical temperature and emissivity separation algorithms when dealing with hyperspectral FTIR data: the conventional temperature and emissivity algorithms cannot reproduce correct emissivity values when the difference between the ground-leaving radiance and the object's blackbody radiation at its true temperature is on the same order as the instrument random noise, and this phenomenon is very likely to occur near 714 and 1250 cm⁻¹ in field measurements. In order to address this defect, a three-layer perceptron neural network has been introduced into the simultaneous inversion of temperature and emissivity from hyperspectral FTIR data. The soil emissivity spectra from the ASTER spectral library were used to produce the training data, the soil emissivity spectra from the MODIS spectral library were used to produce the test data, and the result of the network test shows the MLP is robust. Meanwhile, the ISSTES algorithm was used to retrieve the temperature and emissivity from the test data. By comparing the results of the MLP and ISSTES, we found the MLP can overcome the disadvantage of typical temperature and emissivity separation, although the rmse of the emissivity derived using the MLP is lower than that of the ISSTES as a whole. Hence, the MLP can be regarded as a beneficial complement to typical temperature and emissivity separation.

  16. Identification of spinal deformity classification with total curvature analysis and artificial neural network.

    Science.gov (United States)

    Lin, Hong

    2008-01-01

    In this paper, a multilayer feed-forward, back-propagation (MLFF/BP) artificial neural network (ANN) was implemented to identify the classification patterns of the scoliosis spinal deformity. In the first step, a simplified 3-D spine model was constructed based on the coronal and sagittal X-ray images. The features of the central axis curve of the spinal deformity patterns in 3-D space were extracted by total curvature analysis. The discrete form of the total curvature, including the curvature and the torsion of the central axis of the simplified 3-D spine model, was derived from difference quotients. The total curvature values of 17 vertebrae from the first thoracic to the fifth lumbar spine formed a Euclidean space of 17 dimensions. The King classification model was tested on this MLFF/BP ANN identification system. The 17 total curvature values were presented to the input layer of the MLFF/BP ANN. In the output layer there were five neurons representing the five King classification types. A total of 37 spinal deformity patterns from scoliosis patients were selected. These 37 patterns were divided into two groups: the training group had 25 patterns and the testing group had 12 patterns. The 25-pattern training group was further divided into five subsets. Based on the definition of the King classification system, each subset contained all five King types. The network training was conducted on these five subsets by the hold-out method, one of the cross-validation variants, and the early stopping method. In each of the five cross-validation sessions, four subsets were used alternately for estimation learning and the remaining subset was used for validation learning. Final network testing was conducted with the remaining 12 patterns in the testing group after the MLFF/BP ANN had been trained on all five subsets of the training group. The performance of the neural network was evaluated by comparing two network topologies, one with one hidden layer and another with two hidden layers. The …
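
    The topology-comparison strategy described above can be sketched generically as follows; the synthetic 17-feature, 5-class data and the layer sizes are stand-ins for the curvature features and King types, not the study's actual data.

```python
# Sketch of the evaluation strategy described above: compare one- and
# two-hidden-layer MLP topologies by cross-validation on 17-feature, 5-class
# data.  The data here are synthetic stand-ins, not the scoliosis curvatures.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neural_network import MLPClassifier

X, y = make_classification(
    n_samples=200, n_features=17, n_informative=10,
    n_classes=5, random_state=0,
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for hidden in [(20,), (20, 10)]:                     # one vs two hidden layers
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(hidden, "mean CV accuracy:", scores.mean().round(3))
```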

  17. Forecast of TEXT plasma disruptions using soft X-rays as input signal in a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Vannucci, A.; Oliveira, K.A.; Tajima, T.

    1998-03-03

    A feed-forward neural network with two hidden layers is used in this work to forecast major and minor disruptive instabilities in TEXT discharges. Using soft X-ray signals as input data, the neural net is trained with one disruptive plasma pulse, and a different disruptive discharge is used for validation. After being properly trained, the network, with the same set of weights, is then used to forecast disruptions in two other plasma pulses. It is observed that the neural net is able to predict the onset of a disruption more than 3 ms in advance. This time interval is almost three times longer than the one obtained previously when the magnetic signal from a Mirnov coil was used to feed the neural networks. To our own eyes we fail to see any indication of an upcoming disruption in the experimental data this far back from the time of disruption. Finally, from what we observe in the predictive behavior of our network, we speculate whether the disruption triggering mechanism is associated with an increase of the m = 2 magnetic island, which then disturbs the central part of the plasma column, or whether, in light of the results of this work, the initial perturbation occurs first in the central part of the plasma column, within the q = 1 magnetic surface, and the m = 2 MHD mode is destabilized afterwards.

  18. Image analysis and multi-layer perceptron artificial neural networks for the discrimination between benign and malignant endometrial lesions.

    Science.gov (United States)

    Makris, Georgios-Marios; Pouliakis, Abraham; Siristatidis, Charalampos; Margari, Niki; Terzakis, Emmanouil; Koureas, Nikolaos; Pergialiotis, Vasilios; Papantoniou, Nikolaos; Karakitsos, Petros

    2017-03-01

    This study aims to investigate the efficacy of an artificial neural network based on a multi-layer perceptron (MLP-ANN) to discriminate between benign and malignant endometrial nuclei and lesions in cytological specimens. We collected 416 histologically confirmed liquid-based cytological smears from 168 healthy patients, 152 patients with malignancy, 52 with hyperplasia without atypia, 20 with hyperplasia with atypia, and 24 patients with endometrial polyps. The morphometric characteristics of 90 nuclei per case were analyzed using a custom image analysis system; half of them were used to train the MLP-ANN model, which classified each nucleus as benign or malignant. Data from the remaining 50% of cases were used to evaluate the performance and stability of the ANN. The MLP-ANN for the nuclei classification (numeric and percentage classifiers) and the algorithms for the determination of the optimum threshold values were estimated with in-house developed software for the MATLAB v2011b programming environment; the diagnostic accuracy measures were also calculated. The accuracy of the MLP-ANN model for the classification of endometrial nuclei was 81.33%, while specificity was 88.84% and sensitivity 69.38%. For the case classification based on the numeric classifier, the overall accuracy was 90.87%, the specificity 93.03% and the sensitivity 87.79%; the indices for the percentage classifier were 95.91%, 93.44%, and 99.42%, respectively. Computerized systems based on ANNs can aid the cytological classification of endometrial nuclei and lesions with sufficient sensitivity and specificity. Diagn. Cytopathol. 2017;45:202-211. © 2017 Wiley Periodicals, Inc.

  19. Classification of Atrial Septal Defect and Ventricular Septal Defect with Documented Hemodynamic Parameters via Cardiac Catheterization by Genetic Algorithms and Multi-Layered Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Mustafa Yıldız

    2012-08-01

    Full Text Available Introduction: We aimed to develop a classification method to discriminate ventricular septal defect and atrial septal defect by using several hemodynamic parameters. Patients and Methods: Forty-three patients (30 atrial septal defect, 13 ventricular septal defect; 26 female, 17 male) with documented hemodynamic parameters via cardiac catheterization were included in the study. Parameters such as blood pressure values of different areas, gender, age and Qp/Qs ratios were used for classification. The parameters used in classification were determined by the divergence analysis method. Those parameters are: (i) pulmonary artery diastolic pressure, (ii) Qp/Qs ratio, (iii) right atrium pressure, (iv) age, (v) pulmonary artery systolic pressure, (vi) left ventricular systolic pressure, (vii) aorta mean pressure, (viii) left ventricular diastolic pressure, (ix) aorta diastolic pressure, (x) aorta systolic pressure. These parameters, measured in our study population, were fed into a multi-layered artificial neural network, and the network was trained by a genetic algorithm. Results: The training cluster consists of 14 cases (7 atrial septal defect and 7 ventricular septal defect). The overall success ratio is 79.2%, and with proper training of the artificial neural network this ratio increases up to 89%. Conclusion: Parameters of the artificial neural network, which would need to be determined by the investigator in classical methods, can easily be detected with the help of genetic algorithms. During the training of the artificial neural network by genetic algorithms, both the topology of the network and the factors of the network can be determined. During the test stage, elements not included in the training cluster are assumed to be in the test cluster, and as a result of this study we observed that a multi-layered artificial neural network can be trained properly, and that the neural network is a successful method for the aimed classification.

  20. Increasing spatial resolution of CHIRPS rainfall datasets for Cyprus with artificial neural networks

    Science.gov (United States)

    Tymvios, Filippos; Michaelides, Silas; Retalis, Adrianos; Katsanos, Dimitrios; Lelieveld, Jos

    2016-08-01

    The use of high resolution rainfall datasets is an alternative way of studying climatological regions where conventional rain measurements are sparse or not available. Starting in 1981 to near-present, the CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) dataset incorporates 5km×5km resolution satellite imagery with in-situ station data to create gridded rainfall time series for trend analysis, severe events and seasonal drought monitoring. The aim of this work is to further increase the resolution of the rainfall dataset for Cyprus to 1km×1km, by correlating the CHIRPS dataset with elevation information, the NDVI index (Normalized Difference Vegetation Index) from satellite images at 1km×1km and precipitation measurements from the official raingauge network of the Cyprus Department of Meteorology, utilizing Artificial Neural Networks. The Artificial Neural Network architecture implemented is the Multi-Layer Perceptron (MLP) trained with the back propagation method, which is widely used in environmental studies. Seven different network architectures were tested, all with two hidden layers. The number of neurons ranged from 3 to 10 in the first hidden layer and from 5 to 25 in the second hidden layer. The dataset was separated into a randomly selected training set, a validation set and a testing set; the latter is used independently for the final assessment of the models' performance. Using the Artificial Neural Network approach, a new map of the spatial analysis of rainfall is constructed which exhibits a considerable increase in its spatial resolution. A statistical assessment of the new spatial analysis was made using the rainfall ground measurements from the raingauge network. The assessment indicates that the methodology is promising for several applications.
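    The architecture sweep described above can be sketched as follows; the synthetic predictors standing in for elevation, NDVI and CHIRPS rainfall, the seven layer-size pairs, and the use of scikit-learn's built-in early stopping are all assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                                   # stand-ins for elevation, NDVI, CHIRPS value
y = X @ np.array([0.5, 1.0, 2.0]) + 0.1 * rng.normal(size=2000)  # synthetic 1 km rainfall target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Seven candidate two-hidden-layer architectures (sizes assumed within the quoted ranges).
for h1, h2 in [(3, 5), (4, 8), (5, 10), (6, 15), (8, 18), (9, 22), (10, 25)]:
    net = MLPRegressor(hidden_layer_sizes=(h1, h2), early_stopping=True,
                       validation_fraction=0.2, max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    print((h1, h2), round(net.score(X_test, y_test), 3))         # R^2 on the independent test set
```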

  1. Improving the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins by guided-learning through a two-layer neural network.

    Science.gov (United States)

    Faraggi, Eshel; Xue, Bin; Zhou, Yaoqi

    2009-03-01

    This article attempts to increase the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins through improved learning. Most methods developed for improving the backpropagation algorithm of artificial neural networks are limited to small neural networks. Here, we introduce a guided-learning method suitable for networks of any size. The method employs a part of the weights for guiding and the other part for training and optimization. We demonstrate this technique by predicting residue solvent accessibility and real-value backbone torsion angles of proteins. In this application, the guiding factor is designed to satisfy the intuitive condition that, for most residues, the contribution of a residue to the structural properties of another residue is smaller for greater separation in the protein-sequence distance between the two residues. We show that the guided-learning method achieves a 2-4% reduction in 10-fold cross-validated mean absolute errors (MAE) for predicting residue solvent accessibility and backbone torsion angles, regardless of the size of the database, the number of hidden layers and the size of the input windows. This, together with the introduction of a two-layer neural network with a bipolar activation function, leads to a new method that has an MAE of 0.11 for residue solvent accessibility, 36 degrees for psi, and 22 degrees for phi. The method is available as the Real-SPINE 3.0 server at http://sparks.informatics.iupui.edu.

  2. Prediction of translation initiation sites in human mRNA sequences with AUG start codon in weak Kozak context: A neural network approach.

    Science.gov (United States)

    Tikole, Suhas; Sankararamakrishnan, Ramasubbu

    2008-05-16

    Translation of eukaryotic mRNAs is often regulated by nucleotides around the start codon. A purine at position -3 and a guanine at position +4 contribute significantly to enhancing the translation efficiency. Algorithms to predict the translation initiation site often fail to predict the start site if this sequence context is not present. We have developed a neural network method to predict the initiation site of mRNA sequences that lack the preferred nucleotides at positions -3 and +4 surrounding the translation initiation site. Neural networks of various architectures, comprising different numbers of hidden layers, were designed and tested for various sizes of windows of nucleotides surrounding translation initiation sites. We found that the neural network with two hidden layers showed a sensitivity of 83% and a specificity of 73%, indicating a vastly improved performance in successfully predicting the translation initiation site of mRNA sequences with a weak Kozak context. The WeakAUG server is freely available at http://bioinfo.iitk.ac.in/AUGPred/.
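    For reference, the sensitivity and specificity quoted above follow directly from the confusion matrix of predicted versus true initiation sites; the sketch below computes both for a small set of hypothetical binary labels.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels: 1 = true initiation site, 0 = non-site.
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```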

  3. ZnO/Mg-Al Layered Double Hydroxides as a Photocatalytic Bleaching of Methylene Orange - A Black Box Modeling by Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Seyed Ali Hosseini

    2016-10-01

    Full Text Available The paper reports the development of ZnO-MgAl layered double hydroxides as an adsorbent photocatalyst to remove dye pollutants from aqueous solution; the experiments of the photocatalytic study were designed and modeled by response surface methodology (RSM) and an artificial neural network (ANN). The co-precipitation and urea methods were used to synthesize the ZnO-MgAl layered double hydroxides, and FT-IR, XRD and SEM analyses were performed to characterize the catalyst. The performance of the ANN model was determined and showed the efficiency of the model, in comparison with the RSM method, in predicting the percentage of dye removal accurately, with a determination coefficient (R2) of 0.968. The optimized conditions were obtained as follows: 600 °C, 120 min, 0.05 g and 20 ppm for the calcination temperature, irradiation time, catalyst amount and dye pollutant concentration, respectively. Copyright © 2016 BCREC GROUP. All rights reserved. Permalink/DOI: http://doi.org/10.9767/bcrec.11.3.570.299-315

  4. Pharyngeal wall vibration detection using an artificial neural network.

    Science.gov (United States)

    Behbehani, K; Lopez, F; Yen, F C; Lucas, E A; Burk, J R; Axe, J P; Kamangar, F

    1997-05-01

    An artificial-neural-network-based detector of pharyngeal wall vibration (PWV) is presented. PWV signals the imminent occurrence of obstructive sleep apnoea (OSA) in adults who suffer from OSA syndrome. Automated detection of PWV is very important in enhancing continuous positive airway pressure (CPAP) therapy by allowing automatic adjustment of the applied airway pressure by a procedure called automatic positive airway pressure (APAP) therapy. A network with 15 inputs, one output, and two hidden layers, each with two Adaline-nodes, is used as part of a PWV detection scheme. The network is initially trained using nasal mask pressure data from five positively diagnosed OSA patients. The performance of the ANN-based detector is evaluated using data from five different OSA patients. The results show that on the average it correctly detects the presence of PWV events at a rate of approximately 92% and correctly distinguishes normal breaths approximately 98% of the time. Further, the ANN-based detector accuracy is not affected by the pressure level required for therapy.

  5. Single Layer Recurrent Neural Network for detection of swarm-like earthquakes in W-Bohemia/Vogtland-the method

    Science.gov (United States)

    Doubravová, Jana; Wiszniowski, Jan; Horálek, Josef

    2016-08-01

    In this paper, we present a new method of local event detection of swarm-like earthquakes based on neural networks. The proposed algorithm uses unique neural network architecture. It combines features used in other neural network concepts such as the Real Time Recurrent Network and Nonlinear Autoregressive Neural Network to achieve good performance of detection. We use the recurrence combined with various delays applied to recurrent inputs so the network remembers history of many samples. This method has been tested on data from a local seismic network in West Bohemia with promising results. We found that phases not picked in training data diminish the detection capability of the neural network and proper preparation of training data is therefore fundamental. To train the network we define a parameter called the learning importance weight of events and show that it affects the number of acceptable solutions achieved by many trials of the Back Propagation Through Time algorithm. We also compare the individual training of stations with training all of them simultaneously, and we conclude that results of joint training are better for some stations than training only one station.

  6. Gait recognition based on double-layer convolutional neural networks

    Institute of Scientific and Technical Information of China (English)

    王欣; 唐俊; 王年

    2015-01-01

    This paper proposes a gait recognition algorithm using double-layer convolutional neural networks (D-CNN) and plantar pressure images. First, the images acquired from the plantar pressure test system are preprocessed. Second, single-layer and double-layer convolution features are learned with the convolutional neural network model. Finally, the convolution features are used to train SVM classifiers and obtain the classification results. The experimental results demonstrate the effectiveness of the proposed method.

  7. "Paradox of slow frequencies" - Are slow frequencies in upper cortical layers a neural predisposition of the level/state of consciousness (NPC)?

    Science.gov (United States)

    Northoff, Georg

    2017-09-01

    Consciousness research has focused much on faster frequencies like alpha or gamma while neglecting the slower ones in the infraslow (0.001-0.1Hz) and slow (0.1-1Hz) frequency range. These slower frequency ranges have a "bad reputation", though; an increase in their power can be observed during the loss of consciousness as in sleep, anesthesia, and the vegetative state. However, at the same time, slower frequencies have been conceived of as instrumental for consciousness. The present paper aims to resolve this paradox, which I describe as the "paradox of slow frequencies". I first show various data that suggest a central role of slower frequencies in integrating faster ones, i.e., "temporo-spatial integration and nestedness". Such "temporo-spatial integration and nestedness" is disrupted during the loss of consciousness as in anesthesia and sleep, leading to "temporo-spatial fragmentation and isolation" between slow and fast frequencies. Slow frequencies are supposedly mediated by neural activity in upper cortical layers in higher-order associative regions, as distinguished from lower cortical layers that are related to faster frequencies. Taken together, slower and faster frequencies take on different roles for the level/state of consciousness. Faster frequencies by themselves are sufficient and thus a neural correlate of consciousness (NCC), while slower frequencies are a necessary, non-sufficient condition of possible consciousness, e.g., a neural predisposition of the level/state of consciousness (NPC). This resolves the "paradox of slow frequencies" in that it assigns different roles to slower and faster frequencies in consciousness, i.e., NCC and NPC. Taken as NCC and NPC, fast and slow frequencies, including their relation as in "temporo-spatial integration and nestedness", can be considered a first "building block" of a future "temporo-spatial theory of consciousness" (TTC) (Northoff, 2013; Northoff, 2014b; Northoff & Huang, 2017). Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Layer-specific entrainment of gamma-band neural activity by the alpha rhythm in monkey visual cortex

    NARCIS (Netherlands)

    Spaak, E.; Bonnefond, M.; Maier, A.; Leopold, D.A.; Jensen, O.

    2012-01-01

    Although the mammalian neocortex has a clear laminar organization, layer-specific neuronal computations remain to be uncovered. Several studies suggest that gamma band activity in primary visual cortex (V1) is produced in granular and superficial layers and is associated with the processing of visual input.

  9. High serotonin levels during brain development alter the structural input-output connectivity of neural networks in the rat somatosensory layer IV

    Directory of Open Access Journals (Sweden)

    Stéphanie eMiceli

    2013-06-01

    Full Text Available Homeostatic regulation of serotonin (5-HT) concentration is critical for normal topographical organization and development of thalamocortical (TC) afferent circuits. Down-regulation of the serotonin transporter (SERT) and the consequent impaired reuptake of 5-HT at the synapse result in reduced terminal branching of developing TC afferents within the primary somatosensory cortex (S1). Despite the presence of multiple genetic models, the effect of high extracellular 5-HT levels on the structure and function of developing intracortical neural networks is far from being understood. Here, using juvenile SERT knockout (SERT-/-) rats we investigated, in vitro, the effect of increased 5-HT levels on the structural organization of (i) the thalamocortical projections of the ventroposteromedial thalamic nucleus towards S1, (ii) the general barrel-field pattern and (iii) the electrophysiological and morphological properties of the excitatory cell population in layer IV of S1 (spiny stellate and pyramidal cells). Our results confirmed previous findings that high levels of 5-HT during development lead to a reduction of the topographical precision of TCA projections towards the barrel cortex. Also, the barrel pattern was altered but not abolished in SERT-/- rats. In layer IV, both excitatory spiny stellate and pyramidal cells showed a significantly reduced intracolumnar organization of their axonal projections. In addition, the layer IV spiny stellate cells gave rise to a prominent projection towards the infragranular layer Vb. Our findings point to a structural and functional reorganization of TCAs, as well as of early-stage intracortical microcircuitry, following the disruption of 5-HT reuptake during critical developmental periods. The increased projection pattern of the layer IV neurons suggests that the intracortical network changes are not limited to the main entry layer IV but may also affect the subsequent stages of the canonical circuits of the barrel

  10. Development of a multi-layer perceptron artificial neural network model to determine haul trucks energy consumption

    Institute of Scientific and Technical Information of China (English)

    Soofastaei Ali; Aminossadati Saiied M.; Arefi Mohammad M.; Kizil Mehmet S.

    2016-01-01

    The mining industry annually consumes trillions of British thermal units of energy, a large part of which is saveable. Diesel fuel is a significant source of energy in surface mining operations and haul trucks are the major users of this energy source. Gross vehicle weight, truck velocity and total resistance have been recognised as the key parameters affecting the fuel consumption. In this paper, an artificial neural network model was developed to predict the fuel consumption of haul trucks in surface mines based on the gross vehicle weight, truck velocity and total resistance. The network was trained and tested using real data collected from a surface mining operation. The results indicate that the artificial neural network modelling can accurately predict haul truck fuel consumption based on the values of the haulage parameters considered in this study.

  11. An optimized recursive learning algorithm for three-layer feedforward neural networks for mimo nonlinear system identifications

    OpenAIRE

    2010-01-01

    Back-propagation with the gradient method is the most popular learning algorithm for feed-forward neural networks. However, it is critical to determine a proper fixed learning rate for the algorithm. In this paper, an optimized recursive algorithm is presented for online learning, derived analytically from matrix operations and optimization methods, which avoids the trouble of selecting a proper learning rate for the gradient method. A proof of weak convergence of the proposed algorithm is also given...

  12. Layer-specific entrainment of γ-band neural activity by the α rhythm in monkey visual cortex.

    Science.gov (United States)

    Spaak, Eelke; Bonnefond, Mathilde; Maier, Alexander; Leopold, David A; Jensen, Ole

    2012-12-18

    Although the mammalian neocortex has a clear laminar organization, layer-specific neuronal computations remain to be uncovered. Several studies suggest that gamma band activity in primary visual cortex (V1) is produced in granular and superficial layers and is associated with the processing of visual input. Oscillatory alpha band activity in deeper layers has been proposed to modulate neuronal excitability associated with changes in arousal and cognitive factors. To investigate the layer-specific interplay between these two phenomena, we characterized the coupling between alpha and gamma band activity of the local field potential in V1 of the awake macaque. Using multicontact laminar electrodes to measure spontaneous signals simultaneously from all layers of V1, we found a robust coupling between alpha phase in the deeper layers and gamma amplitude in granular and superficial layers. Moreover, the power in the two frequency bands was anticorrelated. Taken together, these findings demonstrate robust interlaminar cross-frequency coupling in the visual cortex, supporting the view that neuronal activity in the alpha frequency range phasically modulates processing in the cortical microcircuit in a top-down manner.

  13. Relationship between fatigue life of asphalt concrete and polypropylene/polyester fibers using artificial neural network and genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Morteza Vadood; Majid Safar Johari; Ali Reza Rahai

    2015-01-01

    While various kinds of fibers are used to improve hot mix asphalt (HMA) performance, few works have been undertaken on hybrid fiber-reinforced HMA. Therefore, the fatigue life of modified HMA samples using polypropylene and polyester fibers was evaluated, and two models, namely regression and an artificial neural network (ANN), were used to predict the fatigue life based on the fiber parameters. As an ANN contains many parameters, such as the number of hidden layers, which directly influence the prediction accuracy, a genetic algorithm (GA) was used to solve the optimization problem for the ANN. Moreover, the trial and error method was used to optimize the GA parameters, such as the population size. The comparison of the results obtained from regression and the GA-optimized ANN shows that a two-hidden-layer ANN with two and five neurons in the first and second hidden layers, respectively, can predict the fatigue life of fiber-reinforced HMA with high accuracy (correlation coefficient of 0.96).
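    A toy sketch of the GA-over-ANN idea described above (not the authors' implementation): each chromosome encodes the two hidden-layer sizes, fitness is the cross-validated score of the resulting network, and simple selection plus mutation evolves the architecture. The data, population size, mutation step and generation count are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))                                          # stand-in fiber parameters
y = X @ np.array([1.0, -0.5, 2.0, 0.3]) + 0.1 * rng.normal(size=120)   # stand-in fatigue life

def fitness(n1, n2):
    net = MLPRegressor(hidden_layer_sizes=(n1, n2), max_iter=500, random_state=0)
    return cross_val_score(net, X, y, cv=3).mean()                     # cross-validated R^2 as fitness

pop = [tuple(rng.integers(1, 11, size=2)) for _ in range(6)]           # chromosomes: (layer 1, layer 2) sizes
for generation in range(5):
    ranked = sorted(pop, key=lambda c: fitness(*c), reverse=True)
    parents = ranked[:3]                                               # keep the fittest half
    children = [(max(1, n1 + rng.integers(-2, 3)),                     # mutate copies of the parents
                 max(1, n2 + rng.integers(-2, 3))) for n1, n2 in parents]
    pop = parents + children

print("best architecture:", max(pop, key=lambda c: fitness(*c)))
```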

  14. Generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network: input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network features gradient descent based optimization of the smoothing parameter and the addition of a diverge effect term as calculation improvements. The diverge effect term is an improvement to the summation layer calculation that supplies additional separation ability and flexibility. The performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance, up to 89%, is observed. The improved classification performances prove the effectiveness of the proposed neural network.

  15. Segmentation of Textures Defined on Flat vs. Layered Surfaces using Neural Networks: Comparison of 2D vs. 3D Representations.

    Science.gov (United States)

    Oh, Sejong; Choe, Yoonsuck

    2007-08-01

    Texture boundary detection (or segmentation) is an important capability in human vision. Usually, texture segmentation is viewed as a 2D problem, as the definition of the problem itself assumes a 2D substrate. However, an interesting hypothesis emerges when we ask a question regarding the nature of textures: What are textures, and why did the ability to discriminate texture evolve or develop? A possible answer to this question is that textures naturally define physically distinct (i.e., occluded) surfaces. Hence, we can hypothesize that 2D texture segmentation may be an outgrowth of the ability to discriminate surfaces in 3D. In this paper, we conducted computational experiments with artificial neural networks to investigate the relative difficulty of learning to segment textures defined on flat 2D surfaces vs. those in 3D configurations where the boundaries are defined by occluding surfaces and their change over time due to the observer's motion. It turns out that learning is faster and more accurate in 3D, very much in line with our expectation. Furthermore, our results showed that the neural network's learned ability to segment texture in 3D transfers well into 2D texture segmentation, bolstering our initial hypothesis, and providing insights on the possible developmental origin of 2D texture segmentation function in human vision.

  16. Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network.

    Science.gov (United States)

    Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann

    2009-06-01

    Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly's halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As simulations with neurobiologically realistic neural networks demonstrate here spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.

  17. Estimation of MHD boundary layer slip flow over a permeable stretching cylinder in the presence of chemical reaction through numerical and artificial neural network modeling

    Directory of Open Access Journals (Sweden)

    P. Bala Anki Reddy

    2016-09-01

    Full Text Available In this paper, the prediction of magnetohydrodynamic boundary layer slip flow over a permeable stretched cylinder with chemical reaction is investigated using two mathematical techniques, namely the fourth-order Runge–Kutta method with a shooting technique and an artificial neural network (ANN). A numerical method is implemented to approximate the heat and mass transfer characteristics of the flow as a function of several input parameters, explicitly the curvature parameter, magnetic parameter, permeability parameter, velocity slip, Grashof number, solutal Grashof number, Prandtl number, temperature exponent, Schmidt number, concentration exponent and chemical reaction parameter. The non-linear partial differential equations governing the flow are converted into a system of highly non-linear ordinary differential equations by suitable similarity transformations, which are then solved numerically by the fourth-order Runge–Kutta method with a shooting technique, and the ANN is then applied to them. The back-propagation neural network is applied for forecasting the desired outputs. The reported numerical values and the ANN values are in good agreement with those of published works on various special cases. According to the findings of this study, the ANN approach is reliable, effective and easily applicable for simulating heat and mass transfer flow over a stretched cylinder.

  18. A Kind of Second-Order Learning Algorithm Based on Generalized Cost Criteria in Multi-Layer Feed-Forward Neural Networks

    Institute of Scientific and Technical Information of China (English)

    张长江; 付梦印; 金梅

    2003-01-01

    A kind of second-order algorithm, the recursive approximate Newton algorithm, was given by Karayiannis. The algorithm was simplified when it was formulated; in particular, the simplification of the Hessian matrix was rather forced, which led to the loss of valuable information and affected the performance of the algorithm to a certain extent. For multi-layer feed-forward neural networks, a second-order back-propagation recursive algorithm based on generalized cost criteria is proposed. It is proved that it is equivalent to the recursive Newton algorithm and has a second-order convergence rate. The performance and application prospects are analyzed. Extensive simulation experiments indicate that the computational cost of the new algorithm is almost equivalent to that of the recursive least squares algorithm. The algorithm and the selection of network parameters are significant, and the performance is better than that of the BP algorithm and of the second-order learning algorithm given by Karayiannis.

  19. Neural dynamics in a model of the thalamocortical system. I. Layers, loops and the emergence of fast synchronous rhythms.

    Science.gov (United States)

    Lumer, E D; Edelman, G M; Tononi, G

    1997-01-01

    A large-scale computer model was constructed to gain insight into the structural basis for the generation of fast synchronous rhythms (20-60 Hz) in the thalamocortical system. The model consisted of 65,000 spiking neurons organized topographically to represent sectors of a primary and secondary area of mammalian visual cortex, and two associated regions of the dorsal thalamus and the thalamic reticular nucleus. Cortical neurons, both excitatory and inhibitory, were organized in supragranular layers, infragranular layers and layer IV. Reciprocal intra- and interlaminar, interareal, thalamocortical, corticothalamic and thalamoreticular connections were set up based on known anatomical constraints. Simulations of neuronal responses to visual input revealed sporadic epochs of synchronous oscillations involving all levels of the model, similar to the fast rhythms recorded in vivo. By systematically modifying physiological and structural parameters in the model, specific network properties were found to play a major role in the generation of this rhythmic activity. For example, fast synchronous rhythms could be sustained autonomously by lateral and interlaminar interactions within and among local cortical circuits. In addition, these oscillations were propagated to the thalamus and amplified by corticothalamocortical loops, including the thalamic reticular complex. Finally, synchronous oscillations were differentially affected by lesioning forward and backward interareal connections.

  20. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
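    A minimal numpy sketch of the morphological neuron described above: the usual weighted sum is replaced by a maximum of sums (max-plus algebra), so the computation is nonlinear even before thresholding. The weights and inputs here are arbitrary illustrative values.

```python
import numpy as np

def classical_neuron(x, w, b=0.0):
    # Standard model: weighted sum of inputs followed by a hard threshold.
    return float(np.dot(w, x) + b > 0)

def morphological_neuron(x, w, b=0.0):
    # Morphological model: maximum of (input + weight) replaces the sum of products.
    return float(np.max(x + w) + b > 0)

x = np.array([0.2, -1.5, 0.7])
w = np.array([1.0, 0.5, -0.3])
print(classical_neuron(x, w), morphological_neuron(x, w))
```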

  1. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using the mathematical approach in more detail, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  2. Experiments and shape prediction of plasma deposit layer using artificial neural network

    Institute of Scientific and Technical Information of China (English)

    徐继彭; 林柳兰; 胡庆夕; 方明伦

    2006-01-01

    Plasma surfacing is an important enabling technology in high-performance coating applications. Recently, it has been applied to rapid prototyping/tooling to reduce development time and manufacturing cost for the development of new products. However, this technology is in its infancy, and it is essential to understand clearly how process variables relate to deposit microstructure and properties in order to control the plasma deposition manufacturing process. In this paper, the layer appearance of single surfacing under different parameters such as plasma current, voltage, powder feed rate and travel speed is studied. Back-propagation neural networks are used to associate the depositing process variables with the features of the deposit layer shape. These networks can be effectively implemented to estimate the layer shape. The results indicate that neural networks can yield fairly accurate results and can be used as a practical tool in the plasma deposition manufacturing process.

  3. The Tidal Tails of Globular Cluster Palomar 5 Based on Neural Networks Method

    CERN Document Server

    Zou, H; Ma, J; Zhou, X

    2009-01-01

    The Sixth Data Release (DR6) of the Sloan Digital Sky Survey (SDSS) provides more photometric regions, new features and more accurate data around globular cluster Palomar 5. A new method, the Back Propagation Neural Network (BPNN), is used to estimate the cluster membership probability in order to detect its tidal tails. Cluster and field stars, used for training the networks, are extracted over a 40×20 deg² field by color-magnitude diagrams (CMDs). The best BPNNs, with two hidden layers and the Levenberg-Marquardt (LM) training algorithm, are determined by the chosen cluster and field samples. The membership probabilities of stars in the whole field are obtained with the BPNNs, and contour maps of the probability distribution show that a tail extends 5.42° to the north of the cluster and another tail extends 3.77° to the south. The whole tails are similar to those detected by Odenkirchen et al. (2003), but no more debris from the cluster is found to the northeast in the sky. The radial density profiles are investigated both along...

  4. Development of Artificial Neural-Network-Based Models for the Simulation of Spring Discharge

    Directory of Open Access Journals (Sweden)

    M. Mohan Raju

    2011-01-01

    Full Text Available The present study demonstrates the application of artificial neural networks (ANNs) in predicting weekly spring discharge. The study was based on the weekly discharge of a spring located near Ranichauri in the Tehri Garhwal district of Uttarakhand, India. Five models were developed for predicting the spring discharge on a weekly interval using rainfall, evaporation and temperature with a specified lag time. All models were developed both with one and with two hidden layers. Each model was developed through many trials by selecting different network architectures and different numbers of hidden neurons; finally, the best predicting model is presented for each developed model. The models were trained with three different algorithms, that is, the quick-propagation algorithm, the batch backpropagation algorithm, and the Levenberg-Marquardt algorithm, using weekly data from 1999 to 2005. The best model for the simulation was selected from the three algorithms using statistical criteria such as the correlation coefficient, the determination coefficient, and the Nash-Sutcliffe efficiency (DC). Finally, an optimized number of neurons was determined for the best model. Training and testing results revealed that the models predict the weekly spring discharge satisfactorily. Based on these criteria, the ANN-based models result in better agreement for the computation of spring discharge. LMR models were also developed in the study, and they also gave good results, but, when compared with the ANN methodology, ANN resulted in better optimized values.
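    The statistical selection criteria mentioned above (correlation coefficient and Nash-Sutcliffe efficiency) can be computed as in the sketch below; the observed and simulated weekly discharge series are placeholders.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    # NSE = 1 - sum((O - S)^2) / sum((O - mean(O))^2); 1 indicates a perfect fit.
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.2, 1.5, 1.1, 0.9, 1.4, 1.8])   # hypothetical weekly spring discharge
sim = np.array([1.1, 1.6, 1.0, 1.0, 1.3, 1.7])   # hypothetical model output
print("r =", round(np.corrcoef(obs, sim)[0, 1], 3), "NSE =", round(nash_sutcliffe(obs, sim), 3))
```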

  5. Effect of Feature Extraction on Automatic Sleep Stage Classification by Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Prucnal Monika

    2017-06-01

    Full Text Available EEG signal-based sleep stage classification facilitates an initial diagnosis of sleep disorders. The aim of this study was to compare the efficiency of three methods for feature extraction: power spectral density (PSD), discrete wavelet transform (DWT) and empirical mode decomposition (EMD) in the automatic classification of sleep stages by an artificial neural network (ANN). 13650 30-second EEG epochs from the PhysioNet database, representing five sleep stages (W, N1-N3 and REM), were transformed into feature vectors using the aforementioned methods and principal component analysis (PCA). Three feed-forward ANNs with the same optimal structure (12 input neurons, 23 + 22 neurons in two hidden layers and 5 output neurons) were trained using three sets of features, each obtained with one of the compared methods. Calculating PSD from EEG epochs in frequency sub-bands corresponding to the brain waves (81.1% accuracy for the testing set, compared with 74.2% for DWT and 57.6% for EMD) appeared to be the most effective feature extraction method in the analysed problem.
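    A hedged sketch of the PSD-based feature extraction reported as most effective above: a Welch periodogram of a 30-second epoch is integrated over conventional EEG frequency sub-bands, and the resulting band powers serve as ANN inputs. The sampling rate, band edges and random stand-in signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 100                                                # assumed sampling rate (Hz)
epoch = np.random.default_rng(0).normal(size=30 * fs)   # stand-in 30-second EEG epoch

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "sigma": (13, 16), "beta": (16, 30)}

freqs, psd = welch(epoch, fs=fs, nperseg=4 * fs)        # power spectral density of the epoch
df = freqs[1] - freqs[0]
features = {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df   # band power per sub-band
            for name, (lo, hi) in bands.items()}
print(features)
```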

  6. Predicting methionine and lysine contents in soybean meal and fish meal using a group method of data handling-type neural network

    Energy Technology Data Exchange (ETDEWEB)

    Mottaghitalab, M.; Nikkhah, N.; Darmani-Kuhi, H.; López, S.; France, J.

    2015-07-01

    Artificial neural network models offer an alternative to linear regression analysis for predicting the amino acid content of feeds from their chemical composition. A group method of data handling-type neural network (GMDH-type NN), with an evolutionary method of genetic algorithm, was used to predict methionine (Met) and lysine (Lys) contents of soybean meal (SBM) and fish meal (FM) from their proximate analyses (i.e. crude protein, crude fat, crude fibre, ash and moisture). A data set with 119 data lines for Met and 116 lines for Lys was used to develop GMDH-type NN models with two hidden layers. The data lines were divided into two groups to produce training and validation sets. The data sets were imported into the GEvoM software for training the networks. The predictive capability of the constructed models was evaluated by their abilities to estimate the validation data sets accurately. A quantitative examination of goodness of fit for the predictive models was made using a number of precision, concordance and bias statistics. The statistical performance of the models developed revealed close agreement between observed and predicted Met and Lys contents for SBM and FM. The results of this study clearly illustrate the validity of GMDH-type NN models to estimate accurately the amino acid content of poultry feed ingredients from their chemical composition. (Author)

  7. Rapid detection of six phosphodiesterase type 5 enzyme inhibitors in healthcare products using thin-layer chromatography and surface enhanced Raman spectroscopy combined with BP neural network.

    Science.gov (United States)

    Hu, Xiaopeng; Fang, Guozhen; Han, Ailing; Fu, Yunpeng; Tong, Rui; Wang, Shuo

    2017-06-01

    A novel, facile method for the detection of phosphodiesterase type 5 enzyme inhibitors added illegally to health products was established using thin-layer chromatography and surface enhanced Raman spectroscopy combined with a BP neural network. In optimizing the detection conditions, a repetitive addition procedure for the silver colloids, with the total amount kept constant, was used to improve the enhancement effect of surface enhanced Raman spectroscopy. According to the main Raman peaks and the retention factor of the analyte, a data predictive model was established. Under the optimized experimental conditions, this method was successfully applied to detect artificially produced model samples, and a limit of detection of less than 5 mg/kg was obtained. Owing to the excellent sensitivity of this method, real samples were detected accurately and the detection results were confirmed by high-performance liquid chromatography. In addition, the developed method is suitable for the detection of other adulterants, especially those that have similar chromatographic or spectroscopic behaviors. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Qualitative and quantitative high performance thin layer chromatography analysis of Calendula officinalis using high resolution plate imaging and artificial neural network data modelling.

    Science.gov (United States)

    Agatonovic-Kustrin, S; Loescher, Christine M

    2013-10-10

    Calendula officinalis, commonly known as Marigold, has been traditionally used for its anti-inflammatory effects. The aim of this study was to investigate the capacity of an artificial neural network (ANN) to analyse thin layer chromatography (TLC) chromatograms as fingerprint patterns for quantitative estimation of chlorogenic acid, caffeic acid and rutin in Calendula plant extracts. By applying samples with different weight ratios of marker compounds to the system, a database of chromatograms was constructed. A hundred and one signal intensities in each of the HPTLC chromatograms were correlated to the amounts of applied chlorogenic acid, caffeic acid, and rutin using an ANN. The developed ANN correlation was used to quantify the amounts of the 3 marker compounds in calendula plant extracts. Minimum quantifiable levels (MQL) of 610, 190 and 940 ng and limits of detection (LD) of 183, 57 and 282 ng were established for chlorogenic acid, caffeic acid and rutin, respectively. A novel method for quality control of herbal products, based on HPTLC separation, high resolution digital plate imaging and ANN data analysis, has been developed. The proposed method can be adopted for routine evaluation of the phytochemical variability in calendula extracts. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. EEG-Based Classification of New Imagery Tasks Using Three-Layer Feedforward Neural Network Classifier for Brain-Computer Interface

    Science.gov (United States)

    Phothisonothai, Montri; Nakagawa, Masahiro

    2006-10-01

    This paper proposes a classification method for new imagery tasks as a simple binary-command approach to a brain-computer interface (BCI). An analysis of imagery tasks as "yes/no" responses is proposed, since BCI is a very helpful technology for patients who suffer from severe motor disabilities. BCI applications can be realized by using electroencephalogram (EEG) signals recorded at the scalp surface through electrodes. Six healthy subjects (three males and three females), aged 23-30 years, volunteered to participate in the experiment. During the experiment, 10 questions were used as stimuli. The features of the event-related synchronization and event-related desynchronization (ERD/ERS) responses are determined by the slope coefficient and Euclidean distance (SCED) method. The method uses a three-layer feedforward neural network based on a simple backpropagation algorithm to classify the two feature vectors. The experimental results of the proposed method show average accuracy rates of 81.5% and 78.8% when the subjects imagine "yes" and "no", respectively.

  10. Prediction of Emissions from Biodiesel Fueled Transit Buses Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Abhisek Mudgal

    2011-06-01

    Full Text Available The growing demand for freight transportation and passenger cars has led to air pollution, greenhouse gas emissions (especially CO2) and fuel supply concerns. Research has been carried out on biodiesel, which is shown to generate lower emissions. However, the amount of emissions generated is not well understood, which entails more rigorous data collection and the development of emissions models. A comprehensive data collection plan was developed, and emissions (NOx, HC, CO, CO2 and PM10) from biodiesel fueled transit buses were collected using a portable emissions measurement system (PEMS). Linear models were developed and tested for each emission. However, the models could not capture the emission spikes well, resulting in very low R2 values. Artificial neural network (ANN) based models were then employed on these data because of their ability to handle nonlinearity and because they do not require the assumptions on the input data needed by statistical models. Sensitivity analysis was performed on the input parameters, number of hidden layers, learning rate and learning algorithm to arrive at an optimum ANN architecture. The optimal architecture for this study was found to be two hidden layers with 50 hidden nodes for each of NOx, HC, CO, and PM, and one hidden layer for CO2. The emissions were predicted using the best-performing ANN models for each emission. Scatter plots of observed versus predicted values showed R2 of 0.96, 0.94, 0.82, 0.98 and 0.78 for NOx, HC, CO, CO2 and PM emissions, respectively.
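    A sketch of the kind of input sensitivity analysis mentioned above, using permutation of one input at a time and measuring the drop in model score; the generic regressor, synthetic data and hypothetical input names are stand-ins rather than the PEMS dataset or the authors' procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
names = ["speed", "acceleration", "engine_load"]                 # hypothetical operating inputs
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)   # synthetic NOx-like target

net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0).fit(X, y)
base = net.score(X, y)

for j, name in enumerate(names):                                 # permute one input at a time
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    print(f"{name}: score drop = {base - net.score(Xp, y):.3f}")
```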

  11. AUV fuzzy neural BDI

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The typical BDI (belief-desire-intention) model of an agent is not efficiently computable, and its strict logic expression is not easily applicable to the AUV (autonomous underwater vehicle) domain with uncertainties. In this paper, an AUV fuzzy neural BDI model is proposed. The model is a fuzzy neural network composed of five layers: input (beliefs and desires), fuzzification, commitment, fuzzy intention, and defuzzification layers. In the model, the fuzzy commitment rules and the neural network are combined to form intentions from beliefs and desires. The model is demonstrated by solving a PEG (pursuit-evasion game), and the simulation result is satisfactory.

  12. Neutron spectrum unfolding using artificial neural network and modified least square method

    Science.gov (United States)

    Hosseini, Seyed Abolfazl

    2016-09-01

    In the present paper, the neutron spectrum is reconstructed using the Artificial Neural Network (ANN) and Modified Least Square (MLSQR) methods. The detector's response (pulse height distribution), required as input data for unfolding the energy spectrum, is calculated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Unlike the usual methods that apply inversion procedures to unfold the energy spectrum from the Fredholm integral equation, the MLSQR method uses a direct procedure. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry of neutron sources, the neutron pulse height distribution is simulated/measured in the NE-213 detector. The response matrix is calculated using the MCNPX-ESUT computational code through the simulation of the NE-213 detector's response to monoenergetic neutron sources. For a known neutron pulse height distribution, the energy spectrum of the neutron source is unfolded using the MLSQR method. In the developed multilayer perceptron neural network for reconstruction of the energy spectrum of the neutron source, there is no need to form the response matrix. The multilayer perceptron neural network is developed based on logsig, tansig and purelin transfer functions. The developed artificial neural network consists of two hidden layers with hyperbolic tangent sigmoid transfer functions and a linear transfer function in the output layer. The motivation for applying the ANN method may be explained by the fact that no matrix inversion is needed for energy spectrum unfolding. The simulated neutron pulse height distributions in each light bin, due to randomly generated neutron spectra, are considered as the input data of the ANN. Also, the randomly generated energy spectra are considered as the output data of the ANN. The energy spectrum of the neutron source is identified with high accuracy using both the MLSQR and ANN methods. The results obtained from
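    The transfer-function stack described above (tansig hidden layers with a purelin output) amounts to the forward pass sketched below; the layer sizes and random weights are placeholders, not the trained unfolding network.

```python
import numpy as np

def logsig(x):  return 1.0 / (1.0 + np.exp(-x))   # logistic sigmoid (MATLAB-style logsig)
def tansig(x):  return np.tanh(x)                 # hyperbolic tangent sigmoid (tansig)
def purelin(x): return x                          # linear output transfer function

rng = np.random.default_rng(0)
n_in, n_h1, n_h2, n_out = 64, 30, 20, 52          # assumed sizes: light bins in, energy bins out
W1, b1 = rng.normal(size=(n_h1, n_in)), np.zeros(n_h1)
W2, b2 = rng.normal(size=(n_h2, n_h1)), np.zeros(n_h2)
W3, b3 = rng.normal(size=(n_out, n_h2)), np.zeros(n_out)

def unfold(pulse_height_distribution):
    h1 = tansig(W1 @ pulse_height_distribution + b1)   # first hidden layer
    h2 = tansig(W2 @ h1 + b2)                          # second hidden layer
    return purelin(W3 @ h2 + b3)                       # estimated energy spectrum

print(unfold(rng.random(n_in)).shape)
```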

  13. The potential of computer vision, optical backscattering parameters and artificial neural network modelling in monitoring the shrinkage of sweet potato (Ipomoea batatas L.) during drying.

    Science.gov (United States)

    Onwude, Daniel I; Hashim, Norhashila; Abdan, Khalina; Janius, Rimfiel; Chen, Guangnan

    2017-07-30

    Drying is a method used to preserve agricultural crops. During the drying of products with high moisture content, structural changes in shape, volume, area, density and porosity occur. These changes could affect the final quality of the dried product and also the effective design of drying equipment. Therefore, this study investigated a novel approach in monitoring and predicting the shrinkage of sweet potato during drying. Drying experiments were conducted at temperatures of 50-70 °C and sample thicknesses of 2-6 mm. The volume and surface area obtained from camera vision, and the perimeter and illuminated area from backscattered optical images, were analysed and used to evaluate the shrinkage of sweet potato during drying. The relationship between dimensionless moisture content and shrinkage of sweet potato in terms of volume, surface area, perimeter and illuminated area was found to be linearly correlated. The results also demonstrated that the shrinkage of sweet potato based on computer vision and backscattered optical parameters is affected by the product thickness, drying temperature and drying time. A multilayer perceptron (MLP) artificial neural network with an input layer containing three cells, two hidden layers (18 neurons), and five cells in the output layer was used to develop a model that can monitor, control and predict the shrinkage parameters and moisture content of sweet potato slices under different drying conditions. The developed ANN model satisfactorily predicted the shrinkage and dimensionless moisture content of sweet potato with a correlation coefficient greater than 0.95. Combined computer vision, laser light backscattering imaging and artificial neural network can be used as a non-destructive, rapid and easily adaptable technique for in-line monitoring, predicting and controlling the shrinkage and moisture changes of food and agricultural crops during drying. © 2017 Society of Chemical Industry.

  14. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer networks are often used. Among many neural network architectures, the three-layer feed-forward backpropagation neural...

  15. Prediction of the waste stabilization pond performance using linear multiple regression and multi-layer perceptron neural network: a case study of Birjand, Iran

    Directory of Open Access Journals (Sweden)

    Maryam Khodadadi

    2016-06-01

    Full Text Available Background: Data mining (DM) is an approach used in extracting valuable information from environmental processes. This research depicts a DM approach used in extracting some information from influent and effluent wastewater characteristic data of a waste stabilization pond (WSP) in Birjand, a city in Eastern Iran. Methods: Multiple regression (MR) and neural network (NN) models were examined using influent characteristics (pH, biochemical oxygen demand [BOD5], temperature, chemical oxygen demand [COD], total suspended solids [TSS], total dissolved solids [TDS], electrical conductivity [EC] and turbidity) as the regression input vectors. Models were adjusted to the input attributes, effluent BOD5 (BODout) and COD (CODout). The models' performances were estimated by 10-fold external cross-validation. An internal 5-fold cross-validation was also used for the training data set in the NN model. The models were compared using the regression error characteristic (REC) plot and other statistical measures such as the relative absolute error (RAE). Sensitivity analysis was also applied to extract useful knowledge from the NN model. Results: NN models (with RAE = 78.71 ± 1.16 for BODout and 83.67 ± 1.35 for CODout) and MR models (with RAE = 84.40% ± 1.07 for BODout and 88.07 ± 0.80 for CODout) indicate different performances, and the former was better (P < 0.05) for the prediction of both effluent BOD5 and COD parameters. For the prediction of CODout, the NN model with hidden layer size (H) = 4 and decay factor = 0.75 ± 0.03 presented the best predictive results. For BODout, the H and decay factor were found to be 4 and 0.73 ± 0.03, respectively. TDS was found to be the most descriptive influent wastewater characteristic for the prediction of the WSP performance. The REC plots confirmed the NN model's performance superiority for both BOD and COD effluent prediction. Conclusion: Modeling the performance of WSP systems using NN models along with sensitivity analysis can offer better

  16. The neural crest and neural crest cells: discovery and significance for theories of embryonic organization

    Indian Academy of Sciences (India)

    Brian K Hall

    2008-12-01

    The neural crest has long fascinated developmental biologists, and, increasingly over the past decades, evolutionary and evolutionary developmental biologists. The neural crest is the name given to the fold of ectoderm at the junction between neural and epidermal ectoderm in neurula-stage vertebrate embryos. In this sense, the neural crest is a morphological term akin to head fold or limb bud. This region of the dorsal neural tube consists of neural crest cells, a special population (or populations) of cells that give rise to an astonishing number of cell types and to an equally astonishing number of tissues and organs. Neural crest cell contributions may be direct — providing cells — or indirect — providing a necessary, often inductive, environment in which other cells develop. The enormous range of cell types produced provides an important source of evidence of the neural crest as a germ layer, bringing the number of germ layers to four — ectoderm, endoderm, mesoderm, and neural crest. In this paper I provide a brief overview of the major phases of investigation into the neural crest and the major players involved, discuss how the origin of the neural crest relates to the origin of the nervous system in vertebrate embryos, discuss the impact on the germ-layer theory of the discovery of the neural crest and of secondary neurulation, and present evidence of the neural crest as the fourth germ layer. A companion paper (Hall, Evol. Biol. 2008) deals with the evolutionary origins of the neural crest and neural crest cells.

  17. The tidal tails of globular cluster Palomar 5 based on the neural networks method

    Institute of Scientific and Technical Information of China (English)

    Hu Zou; Zhen-Yu WU; Jun Ma; Xu Zhou

    2009-01-01

    The sixth Data Release (DR6) of the Sloan Digital Sky Survey (SDSS) provides more photometric regions, new features and more accurate data around the globular cluster Palomar 5. A new method, the Back Propagation Neural Network (BPNN), is used to estimate the cluster membership probability in order to detect its tidal tails. Cluster and field stars, used for training the networks, are extracted over a 40×20 deg² field by color-magnitude diagrams (CMDs). The best BPNNs, with two hidden layers and a Levenberg-Marquardt (LM) training algorithm, are determined by the chosen cluster and field samples. The membership probabilities of stars in the whole field are obtained with the BPNNs, and contour maps of the probability distribution show that a tail extends 5.42° to the north of the cluster and another tail extends 3.77° to the south. The tails are similar to those detected by Odenkirchen et al., but no more debris from the cluster is found to the northeast in the sky. The radial density profiles are investigated both along the tails and near the cluster center. Quite a few substructures are discovered in the tails. The number density profile of the cluster is fitted with the King model and the tidal radius is determined as 14.28'. However, the King model cannot fit the observed profile at the outer regions (R > 8') because of the tidal tails generated by the tidal force. Luminosity functions of the cluster and the tidal tails are calculated, which confirm that the tails originate from Palomar 5.
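
    An illustrative sketch of the membership-probability step is given below. It is not the paper's pipeline: scikit-learn has no Levenberg-Marquardt trainer, so L-BFGS is substituted, and the colour-magnitude features and data are synthetic placeholders.

```python
# Sketch only: a two-hidden-layer network mapping CMD features to a
# cluster-membership probability (cluster = 1, field = 0).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
# Hypothetical features: [g - r colour, r magnitude]
X_cluster = rng.normal(loc=[0.3, 20.0], scale=[0.05, 1.0], size=(500, 2))
X_field   = rng.normal(loc=[0.8, 19.0], scale=[0.30, 2.0], size=(500, 2))
X = np.vstack([X_cluster, X_field])
y = np.r_[np.ones(500), np.zeros(500)]

clf = MLPClassifier(hidden_layer_sizes=(10, 10), solver="lbfgs",
                    max_iter=2000, random_state=2).fit(X, y)
membership_prob = clf.predict_proba(X[:5])[:, 1]   # P(star belongs to the cluster)
print(membership_prob)
```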

  18. Prediction of protein domain structural class based on secondary structure contents by BP neural networks with a competitive layer

    Institute of Scientific and Technical Information of China (English)

    闫化军; 章毅

    2004-01-01

    In this paper, the problem of predicting protein/domain structural class from secondary structure contents is investigated via BP neural networks (multilayer perceptrons with the back-propagation algorithm) plus a competitive layer. By embedding a competitive layer into the network, the prediction accuracy can be significantly improved. With a small training set and a simple network architecture, a high prediction accuracy has been achieved: self-consistency accuracy of 97.62%, jack-knife test accuracy of 97.62% and extrapolating accuracy of 90.74% on average. It is believed that the neural networks of this paper can provide a more appropriate protein/domain structural class assignment criterion once a complete classification attribute vector and a larger, more representative training set are built up.
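
    One minimal reading of "BP network plus a competitive layer" is a winner-take-all stage applied to the backpropagation outputs, sketched below. This is only an interpretation for illustration, not the authors' exact architecture; the class names and activations are hypothetical.

```python
# Sketch: winner-take-all competitive layer over per-class BP activations.
import numpy as np

def competitive_layer(activations):
    """Return a one-hot vector per row, marking the largest activation."""
    winners = np.zeros_like(activations)
    winners[np.arange(len(activations)), activations.argmax(axis=1)] = 1.0
    return winners

# Hypothetical BP outputs for 3 proteins over 4 structural classes
# (all-alpha, all-beta, alpha/beta, alpha+beta):
bp_outputs = np.array([[0.7, 0.2, 0.05, 0.05],
                       [0.1, 0.6, 0.20, 0.10],
                       [0.2, 0.2, 0.50, 0.10]])
print(competitive_layer(bp_outputs))   # hard class assignments
```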

  19. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    OM PRAKASH PATEL; ARUNA TIWARI

    2016-11-01

    In this paper, a quantum-based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms the neural network structure by deciding the weights and a separability parameter in a quantum-based manner. The quantum computing concept represents solutions probabilistically and gives a large search space in which to find optimal values of the required parameters using a Gaussian random number generator. The network is formed constructively with three layers: an input layer, a hidden layer and an output layer. This constructive way of building the network eliminates unnecessary training. A new parameter, the quantum separability parameter (QSP), is introduced; during learning it searches for an optimal separability plane to classify the input samples and is taken as the threshold of the neuron. The algorithm is tested with three benchmark datasets and produces better results than existing quantum-inspired and other classification approaches.

  20. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  2. Medical diagnosis using neural network

    CERN Document Server

    Kamruzzaman, S M; Siddiquee, Abu Bakar; Mazumder, Md Ehsanul Hoque

    2010-01-01

    This research searches for alternatives for the resolution of complex medical diagnosis problems, where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to reach an optimal size of the neural network. The MFNNCA was tested on several benchmark classification problems including cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural networ...
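
    A hedged sketch of the constructive idea is shown below: keep enlarging the single hidden layer one unit at a time until validation accuracy stops improving. Unlike the MFNNCA itself, this simplification retrains from scratch at each size; the dataset and stopping rule are placeholders chosen only because the record mentions a cancer benchmark.

```python
# Sketch: grow the hidden layer one unit at a time, stop when accuracy plateaus.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

best_acc, best_size = 0.0, 0
for n_hidden in range(1, 16):                      # add one hidden unit at a time
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=3000,
                        random_state=0).fit(X_tr, y_tr)
    acc = clf.score(X_va, y_va)
    if acc > best_acc + 1e-3:                      # keep growing while accuracy improves
        best_acc, best_size = acc, n_hidden
    else:
        break                                      # near-minimal architecture found
print(f"selected {best_size} hidden units, validation accuracy {best_acc:.3f}")
```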

  3. Building a Chaotic Proved Neural Network

    CERN Document Server

    Bahi, Jacques M; Salomon, Michel

    2011-01-01

    Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.

  4. An implementation of the Levenberg-Marquardt algorithm for simultaneous-energy-gradient fitting using two-layer feed-forward neural networks

    Science.gov (United States)

    Nguyen-Truong, Hieu T.; Le, Hung M.

    2015-06-01

    We present in this study a new and robust algorithm for feed-forward neural network (NN) fitting. This method is developed for application in potential energy surface (PES) construction, in which simultaneous energy-gradient fitting is implemented using the well-established Levenberg-Marquardt (LM) algorithm. Three fitting examples are demonstrated, which include the vibrational PES of H2O and the reactive PESs of O3 and ClOOCl. In the three test cases, our new LM implementation has been shown to work very efficiently. Besides increasing fitting accuracy, it offers two other advantages: fewer training iterations are needed and fewer data points are required for fitting.
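
    The essential idea of simultaneous energy-gradient fitting is that the loss penalizes both the energy error and the error of the analytic gradient of the network output. The PyTorch sketch below shows that combined loss; it is not the paper's LM implementation (Adam is substituted purely for brevity), and the coordinates, energies and gradients are random placeholders standing in for ab initio PES data.

```python
# Sketch: fit energies and their gradients with one loss function.
import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

coords = torch.randn(64, 3, requires_grad=True)   # e.g. internal coordinates
e_ref  = torch.randn(64, 1)                       # reference energies
g_ref  = torch.randn(64, 3)                       # reference energy gradients

for step in range(200):
    opt.zero_grad()
    e_pred = net(coords)
    # dE/dx via automatic differentiation; create_graph=True keeps it
    # differentiable so the gradient-matching term can be backpropagated.
    g_pred, = torch.autograd.grad(e_pred.sum(), coords, create_graph=True)
    loss = ((e_pred - e_ref) ** 2).mean() + 0.1 * ((g_pred - g_ref) ** 2).mean()
    loss.backward()
    opt.step()
```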

  5. The Application of Neural Network in Lifetime Prediction of Concrete

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    There are many difficulties in concrete durability prediction, especially in accurately predicting the service life of concrete engineering. It is determined by the concentration of SO42-/Mg2+/Cl-/Ca2+, the reaction areas, the number of freeze-thaw cycles, the alternation of dry and wet states, the kind of cement, etc. In general, because of the complexity of the problem itself and cognitive limitations, durability prediction under sulphate erosion is still vague and uncertain, so this paper adopts neural network technology to study the problem. Through this analysis, the paper sets up a three-layer neural network and a four-layer neural network to predict durability under sulphate erosion. The three-layer neural network has 13 input nodes, 7 output nodes and 34 hidden nodes. The four-layer neural network also has 13 input nodes and 7 output nodes, with two hidden layers of 7 and 8 nodes respectively. At the end, the paper gives an example with laboratory data and discusses the results and deviations. The paper shows that the deviations result from shortcomings of the training specimens, such as too few training specimens and too few distinctions among them, so more specimens should be collected to reduce data redundancy and improve the reliability of the network analysis conclusions.

  6. Coatings of nanostructured pristine graphene-IrOx hybrids for neural electrodes: Layered stacking and the role of non-oxygenated graphene

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, E. [Institut Ciència de Materials de Barcelona (ICMAB-CSIC), Campus UAB, E-08193, Bellaterra, Barcelona (Spain); Lichtenstein, M.P.; Suñol, C. [Institut d' Investigacions Biomèdiques de Barcelona (IIBB-CSIC), Institut d' Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c/Rosselló 161, 08036 Barcelona (Spain); Casañ-Pastor, N., E-mail: nieves@icmab.es [Institut Ciència de Materials de Barcelona (ICMAB-CSIC), Campus UAB, E-08193, Bellaterra, Barcelona (Spain)

    2015-10-01

    The need to enhance charge capacity in neural stimulation-electrodes is promoting the formation of new materials and coatings. Among all the possible types of graphene, pristine graphene, prepared by graphite electrochemical exfoliation, is used in this work to form a new nanostructured IrOx–graphene hybrid (IrOx–eG). Graphene is stabilized in suspension by IrOx nanoparticles without surfactants. Anodic electrodeposition results in coatings with much smaller roughness than IrOx–graphene oxide. Exfoliated pristine graphene (eG) does not electrodeposit in the absence of iridium, but IrOx-nanoparticle adhesion on graphene flakes drives the process. IrOx–eG has a significantly different electronic state than graphene oxide, and different coordination for carbon. Electron diffraction shows the reflection features expected for graphene. IrOx 1–2 nm cluster/nanoparticles are oxohydroxo-species and adhere to 10 nm graphene platelets. eG induces charge storage capacity values five times larger than in pure IrOx, and if calculated per carbon atom, this enhancement is one order of magnitude larger than that induced by graphene oxide. IrOx–eG coatings show optimal in vitro neural cell viability and function as cell culture substrates. The fully straightforward electrochemical exfoliation and electrodeposition constitutes a step towards the application of graphene in biomedical systems, expanding the knowledge of pristine graphene vs. graphene oxide in bioelectrodes. - Highlights: • Pristine Graphene is incorporated in coatings as nanostructured IrOx–eG hybrid. • IrOx-nanoparticles drive the electrodeposition of graphene. • Hybrid CSC is one order of magnitude the charge capacity of IrOx. • Per carbon atom, the CSC increase is 35 times larger than for graphene oxide. • Neurons are fully functional on the coating.

  7. Compressing Convolutional Neural Networks

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected laye...
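
    The sketch below illustrates the hashing-trick idea behind this family of compression methods (a simplified HashedNets-style relative of the FreshNets approach): many virtual weights share a small pool of stored parameters selected by a hash of their index. It is an illustration of the general technique, not the FreshNets frequency-domain scheme itself; the function name and sizes are hypothetical.

```python
# Sketch: a large weight matrix backed by a small shared parameter vector.
import numpy as np

def hashed_weight_matrix(real_params, rows, cols, layer_id, seed=0):
    """Build a rows x cols weight matrix whose entries are drawn (with a
    hashed sign) from a small shared parameter vector."""
    k = len(real_params)
    w = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            h = hash((seed, layer_id, i, j))
            w[i, j] = np.copysign(real_params[h % k], 1 if (h >> 1) % 2 else -1)
    return w

real_params = np.random.default_rng(3).normal(size=32)    # only 32 stored values
W = hashed_weight_matrix(real_params, rows=64, cols=128, layer_id=0)
print(W.shape, "virtual weights backed by", len(real_params), "real parameters")
```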

  8. Application of neural networks in coastal engineering - An overview

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Manjunatha, Y.R.; Hegde, A.V.

    prediction, wave tranquility studies and near shore morphology are highlighted in this paper. 2 Feed forward neural network: A neural network model is interconnected by several neurons. Generally, a neuron model consists of three layers, namely an input layer, a hidden layer and an output layer (figure: three-layered feed forward neural network, showing the input, hidden and output layers with weights and a bias at a single neuron node). The governing equations are $z_j = \sum_{i=1}^{D} w_{ji} x_i + b_j$ (1), $y_k = \sum_{j=1}^{M} w_{kj}\, T(z_j) + b_k$ (2), $E_p = \tfrac{1}{2}\sum_{k=0}^{N} (O_k - t_k)^2$ (3) and $E = \tfrac{1}{P}\sum_{p=1}^{P} E_p$ (4)...

  9. Application of BP Neural Network in Distortion Correction of Large FOV Display

    Institute of Scientific and Technical Information of China (English)

    田立坤; 刘晓宏; 李洁

    2012-01-01

    Geometric distortion may appear in large field-of-view (FOV) electro-optic image display equipment, caused by the optical system. To improve image distortion correction and to overcome the limitations of the traditional BP algorithm, such as local minima and slow convergence, the Levenberg-Marquardt algorithm, based on optimization theory, was used. A distortion correction method based on a BP neural network containing two hidden layers is then proposed, which can adaptively establish a high-precision mapping between the distorted image and the original image without knowing the mathematical model of the distortion. The algorithms were analysed and compared in depth on the Matlab platform. The simulation results show that the BP neural network algorithm with double hidden layers is easy to implement, achieves high precision, and has good data processing ability. Compared with the distortion correction model based on polynomial fitting, all the precision indexes of the distortion correction model based on the BP neural network with double hidden layers are improved significantly.
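
    As a hedged sketch of LM-trained network fitting, the example below fits a small one-hidden-layer network (one layer rather than two, for brevity) with the Levenberg-Marquardt method via `scipy.optimize.least_squares(method="lm")`. The "distortion" being corrected is a synthetic 1-D radial mapping, not a real display calibration.

```python
# Sketch: Levenberg-Marquardt fit of a tiny network's weights.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
r_dist = np.linspace(0.0, 1.0, 200)               # distorted radius
r_true = r_dist * (1.0 + 0.2 * r_dist**2)         # hypothetical barrel distortion

def unpack(p, n_hidden=8):
    w1 = p[:n_hidden]; b1 = p[n_hidden:2 * n_hidden]
    w2 = p[2 * n_hidden:3 * n_hidden]; b2 = p[-1]
    return w1, b1, w2, b2

def forward(p, x):
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(x, w1) + b1)             # one hidden layer of tanh units
    return h @ w2 + b2

def residuals(p):
    return forward(p, r_dist) - r_true            # LM minimises this residual vector

p0 = rng.normal(scale=0.1, size=3 * 8 + 1)
sol = least_squares(residuals, p0, method="lm")
print("max correction error:", np.abs(forward(sol.x, r_dist) - r_true).max())
```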

  10. Traffic sign recognition using a deep-layer convolutional neural network

    Institute of Scientific and Technical Information of China (English)

    黄琳; 张尤赛

    2015-01-01

    In actual traffic environments, the quality of collected traffic sign images is degraded by motion blur, background disturbances, weather conditions and shooting angle. This places high requirements on the accuracy, robustness and real-time performance of automatic traffic sign recognition. In this situation, a traffic sign recognition method based on a deep-layer convolutional neural network is presented. The method adopts the supervised learning model of a deep-layer convolutional neural network, taking the collected traffic sign images, after binarization, directly as inputs. The inputs undergo multiple layers of convolution and pooling (sub-sampling) to simulate the hierarchical structure with which the human brain perceives visual signals, and the characteristics of the traffic sign images are extracted automatically. Traffic sign recognition is then realized using a fully connected network. The experimental results show that the method can extract the characteristics of traffic signs automatically by using the deep learning ability of the convolutional neural network. The method has good generalization ability and a wide adaptive range. By using this method, traditional manual feature extraction is avoided, and the efficiency of traffic sign recognition is improved effectively.
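
    A minimal convolution + pooling + fully-connected classifier of the kind described is sketched below in PyTorch. The authors' exact architecture, input size and class count are not given in the record, so 32x32 binarised inputs and 43 sign classes are assumed purely for illustration.

```python
# Sketch: small CNN with two conv/pool stages and a fully connected head.
import torch
from torch import nn

class TrafficSignCNN(nn.Module):
    def __init__(self, n_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):           # x: (batch, 1, 32, 32) binarised images
        return self.classifier(self.features(x))

model = TrafficSignCNN()
logits = model(torch.rand(4, 1, 32, 32))
print(logits.shape)                 # torch.Size([4, 43])
```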

  11. Neural differentiation and synaptogenesis in retinal development.

    Science.gov (United States)

    Fan, Wen-Juan; Li, Xue; Yao, Huan-Ling; Deng, Jie-Xin; Liu, Hong-Liang; Cui, Zhan-Jun; Wang, Qiang; Wu, Ping; Deng, Jin-Bo

    2016-02-01

    To investigate the pattern of neural differentiation and synaptogenesis in the mouse retina, immunolabeling, BrdU assay and transmission electron microscopy were used. We show that the neuroblastic cell layer is the germinal zone for neural differentiation and retinal lamination. Ganglion cells differentiated initially at embryonic day 13 (E13), and at E18 horizontal cells appeared in the neuroblastic cell layer. Neural stem cells in the outer neuroblastic cell layer differentiated into photoreceptor cells as early as postnatal day 0 (P0), and neural stem cells in the inner neuroblastic cell layer differentiated into bipolar cells at P7. Synapses in the retina were mainly located in the outer and inner plexiform layers. At P7, synaptophysin immunostaining appeared in presynaptic terminals in the outer and inner plexiform layers with button-like structures. After P14, presynaptic buttons were concentrated in outer and inner plexiform layers with strong staining. These data indicate that neural differentiation and synaptogenesis in the retina play important roles in the formation of retinal neural circuitry. Our study showed that the period before P14, especially between P0 and P14, represents a critical period during retinal development. Mouse eye opening occurs during that period, suggesting that cell differentiation and synaptic formation lead to the attainment of visual function.

  12. Autonomous Navigation Apparatus With Neural Network for a Mobile Vehicle

    Science.gov (United States)

    Quraishi, Naveed (Inventor)

    1996-01-01

    An autonomous navigation system for a mobile vehicle arranged to move within an environment includes a plurality of sensors arranged on the vehicle and at least one neural network including an input layer coupled to the sensors, a hidden layer coupled to the input layer, and an output layer coupled to the hidden layer. The neural network produces output signals representing respective positions of the vehicle, such as the X coordinate, the Y coordinate, and the angular orientation of the vehicle. A plurality of patch locations within the environment are used to train the neural networks to produce the correct outputs in response to the distances sensed.

  13. SOLVING INVERSE KINEMATICS OF REDUNDANT MANIPULATOR BASED ON NEURAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    For redundant manipulators, a neural network is used to tackle the velocity inverse kinematics of robot manipulators. The neural networks utilized are multi-layered perceptrons with a back-propagation training algorithm. A weight table is used to save the weights solving the inverse kinematics based on the different optimization performance criteria. Simulations verify the effectiveness of using the neural network.

  14. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  15. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  18. Estimation of concrete compressive strength using artificial neural network

    OpenAIRE

    Kostić, Srđan; Vasović, Dejan

    2015-01-01

    In the present paper, concrete compressive strength is evaluated using a back-propagation feed-forward artificial neural network. Training of the neural network is performed using the Levenberg-Marquardt learning algorithm for four architectures of artificial neural networks, with one, three, eight and twelve nodes in the hidden layer, in order to avoid overfitting. Training, validation and testing of the neural network are conducted for 75 concrete samples with distinct w/c ratio and amount of superp...

  19. Neural processing-type displacement sensor employing multimode waveguide

    Science.gov (United States)

    Aisawa, Shigeki; Noguchi, Kazuhiro; Matsumoto, Takao

    1991-04-01

    A novel neural processing-type displacement sensor, consisting of a multimode waveguide and a neural network, is demonstrated. This sensor detects displacement using changes in the interference output image of the waveguide. The interference image is directly processed by a three-layer perceptron neural network. Environmental changes, such as intensity fluctuations and temperature changes, can be followed by training the neural network. Experimental results show that the sensor has a resolution of 1 micron.

  20. via dynamic neural networks

    Directory of Open Access Journals (Sweden)

    J. Reyes-Reyes

    2000-01-01

    In this paper, an adaptive technique is suggested to provide the passivity property for a class of partially known SISO nonlinear systems. A simple Dynamic Neural Network (DNN), containing only two neurons and without any hidden layers, is used to identify the unknown nonlinear system. By means of a Lyapunov-like analysis, the new learning law for this DNN, guaranteeing both successful identification and passivation effects, is derived. Based on this adaptive DNN model, an adaptive feedback controller, serving a wide class of nonlinear systems with an a priori incomplete model description, is designed. Two typical examples illustrate the effectiveness of the suggested approach.

  1. Temporal and Spatial prediction of groundwater levels using Artificial Neural Networks, Fuzzy logic and Kriging interpolation.

    Science.gov (United States)

    Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.

    2014-05-01

    the ANN. Therefore, the neighborhood of each prediction point is the best available. Then, the appropriate variogram is determined, by fitting the experimental variogram to a theoretical variogram model. Three models are examined, the linear, the exponential and the power-law. Finally, the hydraulic head change is predicted for every grid cell and for every time step used. All the algorithms used were developed in Visual Basic .NET, while the visualization of the results was performed in MATLAB using the .NET COM Interoperability. The results are evaluated using leave one out cross-validation and various performance indicators. The best results were achieved by using ANNs with two hidden layers, consisting of 20 and 15 nodes respectively and by using power-law variogram with the fuzzy logic system.

  2. Dynamic Analysis of Structures Using Neural Networks

    Directory of Open Access Journals (Sweden)

    N. Ahmadi

    2008-01-01

    In recent years, neural networks have been considered the best candidate for fast approximation with arbitrary accuracy in time-consuming problems. Dynamic analysis of structures under earthquake loading is such a time-consuming process. We employed two kinds of neural networks, the Generalized Regression neural network (GR) and the Back-Propagation Wavenet neural network (BPW), for approximating the dynamic time-history response of frame structures. GR is a traditional radial basis function neural network, while BPW is categorized as a wavelet neural network. In BPW, the sigmoid activation functions of the hidden layer neurons are substituted with wavelets, and weight training is achieved using the Scaled Conjugate Gradient (SCG) algorithm. Comparison of the results of BPW with those of GR in the dynamic analysis of an eight-story steel frame indicates that the accuracy of a properly trained BPW is better than that of GR; therefore, BPW can be efficiently used for approximate dynamic analysis of structures.

  3. Visual Grouping by Neural Oscillators

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    Distributed synchronization is known to occur at several scales in the brain, and has been suggested as playing a key functional role in perceptual grouping. State-of-the-art visual grouping algorithms, however, seem to give comparatively little attention to neural synchronization analogies. Based on the framework of concurrent synchronization of dynamic systems, simple networks of neural oscillators coupled with diffusive connections are proposed to solve visual grouping problems. Multi-layer algorithms and feedback mechanisms are also studied. The same algorithm is shown to achieve promising results on several classical visual grouping problems, including point clustering, contour integration and image segmentation.

  4. Matrix representation of a Neural Network

    DEFF Research Database (Denmark)

    Christensen, Bjørn Klint

    This paper describes the implementation of a three-layer feedforward backpropagation neural network. The paper does not explain feedforward, backpropagation or what a neural network is; it is assumed that the reader knows all this. If not, please read chapters 2, 8 and 9 in Parallel Distributed Processing, by David Rummelhart (Rummelhart 1986), for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes, all indexed from 0. Bias-node for the hidden
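
    A compact sketch of that matrix formulation is given below: a three-layer (input-hidden-output) network whose forward pass and one backpropagation step are a handful of matrix products. Sigmoid units, a bias entry appended to each layer's activations, and the layer sizes are assumptions for illustration, not the paper's own listing.

```python
# Sketch: matrix-based forward pass and one backpropagation update.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(5)
I, J, K = 4, 6, 3                               # input, hidden, output node counts
W1 = rng.normal(scale=0.5, size=(I + 1, J))     # +1 row for the bias node
W2 = rng.normal(scale=0.5, size=(J + 1, K))

def forward(x):
    h = sigmoid(np.append(x, 1.0) @ W1)         # hidden activations
    y = sigmoid(np.append(h, 1.0) @ W2)         # output activations
    return h, y

def backprop_step(x, target, lr=0.5):
    global W1, W2
    h, y = forward(x)
    delta_out = (y - target) * y * (1 - y)            # output-layer error term
    delta_hid = (W2[:-1] @ delta_out) * h * (1 - h)   # hidden-layer error term
    W2 -= lr * np.outer(np.append(h, 1.0), delta_out)
    W1 -= lr * np.outer(np.append(x, 1.0), delta_hid)

backprop_step(rng.random(I), np.array([1.0, 0.0, 0.0]))
```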

  5. Design, Construction and Evaluation of Egg Weighing Device Using Capacitive Sensor and Neural Networks

    Directory of Open Access Journals (Sweden)

    S Khalili

    2015-09-01

    egg-laying day, and the second and fourth day after laying. Results and Discussion: In this study, two series of networks were built and evaluated: two-layer networks in the first series and three-layer networks in the second. In the two-layer neural networks, the number of neurons in the hidden layer was varied from 2 to 10. According to the results, the two-layer network with 10 neurons offered the best performance (the highest R-value and minimum RMSE) and can be chosen as the most effective two-layer network. The three-layer neural networks were composed of two hidden layers; the number of neurons in the first hidden layer was 10, and in the second layer it was varied from 1 to 20. Among the three-layer networks, the network with 7 neurons in the second hidden layer, with the highest R-value and the lowest error, was the most appropriate, and it was even more efficient than the two-layer network with 10 neurons. Therefore, the most appropriate structure is 1-7-10-16 and it was selected for calibration of the weighing device. To evaluate the accuracy of the weighing machine, the weights of 24 samples of fresh eggs were predicted and compared with the actual values obtained using a digital scale with an accuracy of 0.01 g. The paired t-test was used to compare the measured and predicted values, and the Bland-Altman method was used to chart the agreement between the measured and predicted values. Based on the findings, differences between the measured and predicted values of up to 5.4 g were observed, which relates to a very large sample. The mean absolute error is 2.21 g and the mean absolute percentage error is 3.75%. According to the findings, the 95% limits of agreement between the two weighing methods are between -5.3 g and 3.36 g. Thus, the dielectric technique may underestimate the egg weight by up to 5.3 g or overestimate it by up to 3.36 g relative to the actual value.

  6. FGF Signaling Transforms Non-neural Ectoderm into Neural Crest

    OpenAIRE

    Yardley, Nathan; García-Castro, Martín I.

    2012-01-01

    The neural crest arises at the border between the neural plate and the adjacent non-neural ectoderm. It has been suggested that both neural and non-neural ectoderm can contribute to the neural crest. Several studies have examined the molecular mechanisms that regulate neural crest induction in neuralized tissues or the neural plate border. Here, using the chick as a model system, we address the molecular mechanisms by which non-neural ectoderm generates neural crest. We report that in respons...

  7. Fate Specification of Neural Plate Border by Canonical Wnt Signaling and Grhl3 is Crucial for Neural Tube Closure.

    Science.gov (United States)

    Kimura-Yoshida, Chiharu; Mochida, Kyoko; Ellwanger, Kristina; Niehrs, Christof; Matsuo, Isao

    2015-06-01

    During primary neurulation, the separation of a single-layered ectodermal sheet into the surface ectoderm (SE) and neural tube specifies SE and neural ectoderm (NE) cell fates. The mechanisms underlying fate specification in conjunction with neural tube closure are poorly understood. Here, by comparing expression profiles between SE and NE lineages, we observed that uncommitted progenitor cells, expressing stem cell markers, are present in the neural plate border/neural fold prior to neural tube closure. Our results also demonstrated that canonical Wnt and its antagonists, DKK1/KREMEN1, progressively specify these progenitors into SE or NE fates in accord with the progress of neural tube closure. Additionally, SE specification of the neural plate border via canonical Wnt signaling is directed by the grainyhead-like 3 (Grhl3) transcription factor. Thus, we propose that the fate specification of uncommitted progenitors in the neural plate border by canonical Wnt signaling and its downstream effector Grhl3 is crucial for neural tube closure. This study suggests that failure of critical genetic factors controlling fate specification of progenitor cells in the neural plate border/neural fold, coordinated with neural tube closure, may be a potential cause of human neural tube defects.

  8. A multilayer recurrent neural network for solving continuous-time algebraic Riccati equations.

    Science.gov (United States)

    Wang, Jun; Wu, Guang

    1998-07-01

    A multilayer recurrent neural network is proposed for solving continuous-time algebraic matrix Riccati equations in real time. The proposed recurrent neural network consists of four bidirectionally connected layers. Each layer consists of an array of neurons. The proposed recurrent neural network is shown to be capable of solving algebraic Riccati equations and synthesizing linear-quadratic control systems in real time. Analytical results on stability of the recurrent neural network and solvability of algebraic Riccati equations by use of the recurrent neural network are discussed. The operating characteristics of the recurrent neural network are also demonstrated through three illustrative examples.

  9. Neural network fault diagnosis method optimization with rough set and genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    SUN Hong-yan; XIE Zhi-jiang; OUYANG Qi

    2006-01-01

    Aiming at the disadvantages of the BP model in artificial neural networks applied to intelligent fault diagnosis, a neural network fault diagnosis optimization method using rough sets and genetic algorithms is presented. The neural network nodes of the input layer can be calculated and simplified through rough set theory; the neural network nodes of the middle layer are designed through genetic algorithm training; and the neural network bottom-up weights and biases are finally obtained through the combination of genetic algorithms and the BP algorithm. The analysis in this paper illustrates that the optimization method can greatly improve the performance of the neural network fault diagnosis method.

  10. Neural-estimator for the surface emission rate of atmospheric gases

    CERN Document Server

    Paes, F F

    2009-01-01

    The emission rate of minority atmospheric gases is inferred by a new approach based on neural networks. The neural network applied is the multi-layer perceptron with the backpropagation algorithm for learning. The identification of these surface fluxes is an inverse problem. A comparison between the new neural inversion and a regularized inverse solution is performed. The results obtained from the neural networks are significantly better. In addition, the inversion with the neural networks is faster than regularized approaches, after training.

  11. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  12. NARX neural networks for sequence processing tasks

    OpenAIRE

    Hristev, Eugen

    2012-01-01

    This project aims at researching and implementing a neural network architecture system for the NARX (Nonlinear AutoRegressive with eXogenous inputs) model, used in sequence processing tasks and particularly in time series prediction. The model can fall back to different types of architectures including time-delay neural networks and the multi layer perceptron. The NARX simulator tests and compares the different architectures for both synthetic and real data, including the time series o...

  13. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  14. Neural Network Applications

    NARCIS (Netherlands)

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.

    1995-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  15. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and the Genetic Algorithm, with the neural network topology and other parameters fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  16. Modeling of Magneto-Rheological Damper with Neural Network

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With the revival of magnetorheological technology research in the 1980's, its application in vehicles is increasingly focused on vibration suppression. Based on the importance of magnetorheological damper modeling, nonparametric modeling with neural network, which is a promising development in semi-active online control of vehicles with MR suspension, has been carried out in this study. A two layer neural network with 7 neurons in a hidden layer and 3 inputs and 1 output was established to simulate the behavior of MR damper at different excitation currents. In the neural network modeling, the damping force is a function of displacement, velocity and the applied current. A MR damper for vehicles is fabricated and tested by MTS; the data acquired are utilized for neural network training and validation. The application and validation show that the predicted forces of the neural network match well with the forces tested with a small variance, which demonstrates the effectiveness and precision of neural network modeling.

  17. Performance Comparison of Neural Networks for HRTFs Approximation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In order to approximate head-related transfer functions (HRTFs), this paper employs and compares three kinds of one-input neural network models, namely multi-layer perceptron (MLP) networks, radial basis function (RBF) networks and wavelet neural networks (WNN), so as to select the best network model for further HRTF approximation. Experimental results demonstrate that wavelet neural networks are more efficient and useful.

  18. An Approach to Structural Approximation Analysis by Artificial Neural Networks

    Institute of Scientific and Technical Information of China (English)

    陆金桂; 周济; 王浩; 陈新度; 余俊; 肖世德

    1994-01-01

    Based on Kolmogorov's mapping neural network existence theorem, this paper theoretically proves that a three-layer neural network can be applied to exactly implement the function between the stresses and displacements and the design variables of any elastic structure. A new approach to structural approximation analysis with global characteristics based on artificial neural networks is presented. Computer simulation experiments show that the new approach is effective.

  19. FUZZY NEURAL NETWORK FOR OBJECT IDENTIFICATION ON INTEGRATED CIRCUIT LAYOUTS

    Directory of Open Access Journals (Sweden)

    A. A. Doudkin

    2015-01-01

    A fuzzy neural network model based on the neocognitron is proposed to identify layout objects in images of the topological layers of integrated circuits. Testing of the model on images of real chip layouts showed a higher degree of identification for the proposed neural network in comparison with the base neocognitron.

  20. Weight-decay induced phase transitions in multilayer neural networks

    NARCIS (Netherlands)

    Ahr, M.; Biehl, M.; Schlösser, E.

    1999-01-01

    We investigate layered neural networks with differentiable activation function and student vectors without normalization constraint by means of equilibrium statistical physics. We consider the learning of perfectly realizable rules and find that the length of student vectors becomes infinite, unless

  1. Neural Induction, Neural Fate Stabilization, and Neural Stem Cells

    Directory of Open Access Journals (Sweden)

    Sally A. Moody

    2002-01-01

    The promise of stem cell therapy is expected to greatly benefit the treatment of neurodegenerative diseases. An underlying biological reason for the progressive functional losses associated with these diseases is the extremely low natural rate of self-repair in the nervous system. Although the mature CNS harbors a limited number of self-renewing stem cells, these make a significant contribution to only a few areas of brain. Therefore, it is particularly important to understand how to manipulate embryonic stem cells and adult neural stem cells so their descendants can repopulate and functionally repair damaged brain regions. A large knowledge base has been gathered about the normal processes of neural development. The time has come for this information to be applied to the problems of obtaining sufficient, neurally committed stem cells for clinical use. In this article we review the process of neural induction, by which the embryonic ectodermal cells are directed to form the neural plate, and the process of neural fate stabilization, by which neural plate cells expand in number and consolidate their neural fate. We will present the current knowledge of the transcription factors and signaling molecules that are known to be involved in these processes. We will discuss how these factors may be relevant to manipulating embryonic stem cells to express a neural fate and to produce large numbers of neurally committed, yet undifferentiated, stem cells for transplantation therapies.

  2. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week, is presented. The developed network is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error backpropagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  3. Natural melanin composites by layer-by-layer assembly

    Science.gov (United States)

    Eom, Taesik; Shim, Bong Sub

    2015-04-01

    Melanin is an electrically conductive and biocompatible material, because its conjugated backbone structures provide conducting pathways, from human skin, eyes and brain, and beyond. So it has potential as a material for neural interfaces and implantable devices. Extracted from Sepia officinalis ink, our natural melanin was uniformly dispersed in mostly polar solvents such as water and alcohols. The dispersed melanin was then further fabricated into nano-thin layered composites by the layer-by-layer (LBL) assembly technique. Combined with polyvinyl alcohol (PVA), the melanin nanoparticles behave as an LBL counterpart to form finely tuned nanostructured films. The LBL process can adjust the smart performance of the composites by varying the layering conditions and sandwich thickness. We further examined the melanin loading degree of the stacked layers, the combined nanostructures, the electrical properties, and the biocompatibility of the resulting composites by UV-vis spectrophotometry, scanning electron microscopy (SEM), a multimeter, and an in-vitro PC12 cell test, respectively.

  4. ESTUDIO DE SERIES TEMPORALES DE CONTAMINACIÓN AMBIENTAL MEDIANTE TÉCNICAS DE REDES NEURONALES ARTIFICIALES TIME SERIES ANALYSIS OF ATMOSPHERE POLLUTION DATA USING ARTIFICIAL NEURAL NETWORKS TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Giovanni Salini Calderón

    2006-12-01

    concentrations between May and August for the years between 1994 and 1996. In order to find the optimal time spacing between data and the number of values into the past necessary to forecast a future value, two standard tests were performed, Average Mutual Information (AMI) and False Nearest Neighbours (FNN). The results of these tests suggest that the most convenient choice for modelling was to use 4 data points with 6-hour spacing on a given day as input in order to forecast the value at 6 AM on the following day. Once the number and type of input and output variables were fixed, we implemented a forecasting model based on the neural network technique. We used a feedforward multilayer neural network and trained it with the backpropagation algorithm. We tested networks with none, one and two hidden layers. The best model was one with one hidden layer, in contradiction with a previous study that found that the minimum error was obtained with a net without a hidden layer. Forecasts with the neural network are more accurate than those produced with a persistence model (the value six hours ahead is taken to be the same as the current value).
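
    The sketch below illustrates the input layout suggested by that AMI/FNN analysis: four lagged values at 6-hour spacing used to forecast a value one day ahead. The pollutant series is synthetic and the forecast horizon of four 6-hour steps is an assumption; the lag-matrix construction is the point of the example.

```python
# Sketch: build lagged 6-hourly inputs and fit a single-hidden-layer MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
hourly = 50 + 20 * np.sin(np.arange(24 * 120) * 2 * np.pi / 24) + rng.normal(0, 5, 24 * 120)
six_hourly = hourly[::6]                      # one sample every 6 hours

lags, horizon = 4, 4                          # 4 lagged inputs, target 4 steps (24 h) ahead
X = np.array([six_hourly[i:i + lags] for i in range(len(six_hourly) - lags - horizon)])
y = six_hourly[lags + horizon:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=6).fit(X, y)
print("next-day forecast:", model.predict(six_hourly[-lags:].reshape(1, -1))[0])
```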

  5. Neural method of spatiotemporal filter design

    Science.gov (United States)

    Szostakowski, Jaroslaw

    1997-10-01

    There are many applications in medical imaging, computer vision, and communications where video processing is critical. Although many techniques have been successfully developed for the filtering of still images, significantly fewer techniques have been proposed for the filtering of noisy image sequences. In this paper a novel approach to spatio-temporal filter design is proposed. Multilayer perceptrons and functional-link nets are used for the 3D filtering. The spatio-temporal patterns are created from real motion video images, and the neural networks learn these patterns. Perceptrons with different numbers of layers and neurons in each layer are tested. Different input functions for the functional-link net are also explored. Practical examples of the filtering are shown and compared with traditional (non-neural) spatio-temporal methods. The results are very interesting and the neural spatio-temporal filters seem to be a very efficient tool for video noise reduction.

  6. Application of Partially Connected Neural Network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses mainly on the application of a Partially Connected Backpropagation Neural Network (PCBP) instead of the typical Fully Connected Neural Network (FCBP). The initial neural network is fully connected; after training with sample data using cross-entropy as the error function, a clustering method is employed to cluster the weights between the input and hidden layer and between the hidden and output layer, and connections that are relatively unnecessary are deleted, so that the initial network becomes a PCBP network. The PCBP can then be used in prediction or data mining by training it with data that comes from a database. At the end of this paper, several experiments are conducted to illustrate the effects of PCBP using the Iris data set.
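
    A hedged sketch of turning a fully connected net into a partially connected one is shown below: after training, the weights of each layer are grouped by magnitude with k-means and the cluster of smallest-magnitude connections is zeroed out. This is a simplified stand-in for the clustering procedure the record describes, using the Iris data it mentions.

```python
# Sketch: prune "unnecessary" connections of a trained MLP by weight clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=7).fit(X, y)

for W in net.coefs_:                                   # input->hidden, hidden->output
    mags = np.abs(W).reshape(-1, 1)
    labels = KMeans(n_clusters=3, n_init=10, random_state=7).fit_predict(mags)
    weakest = np.argmin([mags[labels == c].mean() for c in range(3)])
    W[(labels == weakest).reshape(W.shape)] = 0.0      # delete the weakest connections

print("accuracy after pruning:", net.score(X, y))
```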

  7. Character Recognition Using Novel Optoelectronic Neural Network

    Science.gov (United States)

    1993-04-01

    Only table-of-contents and excerpt fragments are available for this record: the report covers the ADALINE neuron and linear separability (which provides a justification for multilayer networks), the MADALINE (many ADALINEs) multi-layer network, and the ADALINE as an adaptive threshold logic element used in many neural networks (Figure 3.1).

  8. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using differ

  9. Fiber optic Adaline neural networks

    Science.gov (United States)

    Ghosh, Anjan K.; Trepka, Jim; Paparao, Palacharla

    1993-02-01

    Optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators has been discussed recently. We describe the design of a single layer fiber optic Adaline neural network which can be used as a bit pattern classifier. In our realization we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network described in this paper is designed for optical processing of guided lightwave signals, not electronic signals. We analyzed the convergence or learning characteristics of the optically implemented Adaline in the presence of errors in the hardware, and we studied methods for improving the convergence rate of the Adaline.
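
    For reference, the Adaline learning rule referred to above (the LMS / Widrow-Hoff update) is sketched below in plain NumPy rather than in optics; the bipolar bit patterns and learning rate are placeholders, and this is only a minimal illustration of the rule, not the optical implementation.

```python
# Sketch: Adaline trained with the LMS (Widrow-Hoff) rule as a bit-pattern classifier.
import numpy as np

rng = np.random.default_rng(8)
X = rng.choice([-1.0, 1.0], size=(200, 8))       # bipolar bit patterns
w_true = rng.normal(size=8)
y = np.sign(X @ w_true)                          # target classes +/- 1

w, eta = np.zeros(8), 0.01
for epoch in range(50):
    for x_i, y_i in zip(X, y):
        err = y_i - x_i @ w                      # linear (pre-threshold) error
        w += eta * err * x_i                     # LMS update
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```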

  10. A recurrent neural network with exponential convergence for solving convex quadratic program and related linear piecewise equations.

    Science.gov (United States)

    Xia, Youshen; Feng, Gang; Wang, Jun

    2004-09-01

    This paper presents a recurrent neural network for solving strict convex quadratic programming problems and related linear piecewise equations. Compared with the existing neural networks for quadratic program, the proposed neural network has a one-layer structure with a low model complexity. Moreover, the proposed neural network is shown to have a finite-time convergence and exponential convergence. Illustrative examples further show the good performance of the proposed neural network in real-time applications.

  11. Neural Network Inverse Adaptive Controller Based on Davidon Least Square

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The general neural network inverse adaptive controller has two flaws: the first is slow convergence speed; the second is that it is invalid for non-minimum phase systems. These defects limit the scope in which the neural network inverse adaptive controller can be used. We employ Davidon least squares in training the multi-layer feedforward neural network used to approximate the inverse model of the plant, to expedite convergence, and then, through constructing a pseudo-plant, a neural network inverse adaptive controller is put forward which is still effective for nonlinear non-minimum phase systems. The simulation results show the validity of this scheme.

  12. Neural Tube Defects

    Science.gov (United States)

    Neural tube defects are birth defects of the brain, spine, or spinal cord. They happen in the ... that she is pregnant. The two most common neural tube defects are spina bifida and anencephaly. In ...

  13. Artificial Neural Networks

    OpenAIRE

    Chung-Ming Kuan

    2006-01-01

    Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANN using familiar econometric terminology and provide an overview of ANN modeling approach and its implementation methods.

  14. Constructive neural network learning

    OpenAIRE

    Lin, Shaobo; Zeng, Jinshan; Zhang, Xiaoqin

    2016-01-01

    In this paper, we aim at developing scalable neural network-type learning systems. Motivated by the idea of "constructive neural networks" in approximation theory, we focus on "constructing" rather than "training" feed-forward neural networks (FNNs) for learning, and propose a novel FNNs learning system called the constructive feed-forward neural network (CFN). Theoretically, we prove that the proposed method not only overcomes the classical saturation problem for FNN approximation, but also ...

  15. An introduction to bio-inspired artificial neural network architectures.

    Science.gov (United States)

    Fasel, B

    2003-03-01

In this introduction to artificial neural networks we attempt to give an overview of the most important types of neural networks employed in engineering and explain briefly how they operate and also how they relate to biological neural networks. The focus will mainly be on bio-inspired artificial neural network architectures and specifically on neoperceptrons. The latter belong to the family of convolutional neural networks. Their topology is somewhat similar to that of the human visual cortex and they are based on receptive fields that allow, in combination with sub-sampling layers, for an improved robustness with regard to local spatial distortions. We demonstrate the application of artificial neural networks to face analysis--a domain we human beings are particularly good at, yet which poses great difficulties for digital computers running deterministic software programs.
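
    To make the receptive-field and sub-sampling idea concrete, the toy sketch below applies a single hand-chosen kernel over local receptive fields and then average-pools the rectified feature map; the kernel, image and pooling size are arbitrary illustrations rather than anything taken from the cited work.

```python
# Toy sketch of the two operations highlighted above for convolutional
# architectures: local receptive fields (convolution) followed by sub-sampling.
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D correlation over a single-channel image."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def subsample(feature_map, size=2):
    """Average pooling: local averaging makes the map tolerant to small shifts."""
    H, W = feature_map.shape
    H, W = H - H % size, W - W % size
    fm = feature_map[:H, :W]
    return fm.reshape(H // size, size, W // size, size).mean(axis=(1, 3))

image = np.random.default_rng(1).random((8, 8))
kernel = np.array([[1.0, 0.0, -1.0]] * 3)            # simple vertical-edge detector
features = np.maximum(0.0, conv2d_valid(image, kernel))   # rectified feature map
print(subsample(features).shape)                     # (3, 3) pooled feature map
```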

  16. READING A NEURAL CODE

    NARCIS (Netherlands)

    BIALEK, W; RIEKE, F; VANSTEVENINCK, RRD; WARLAND, D

    1991-01-01

    Traditional approaches to neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - extracting information about an unknown time-dependent stimulus from short segments of a spike train. Here the neural code was characterized from

  17. Neural network segmentation of magnetic resonance images

    Science.gov (United States)

    Frederick, Blaise

    1990-07-01

Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network; by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. A neural network classifier for image segmentation was implemented on a Sun 4/60 and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier. The classifier's performance was evaluated as a function of network size, number of network layers, and length of training. A single layer neural network performed quite well at

  18. Salience-Affected Neural Networks

    CERN Document Server

    Remmelzwaal, Leendert A; Ellis, George F R

    2010-01-01

    We present a simple neural network model which combines a locally-connected feedforward structure, as is traditionally used to model inter-neuron connectivity, with a layer of undifferentiated connections which model the diffuse projections from the human limbic system to the cortex. This new layer makes it possible to model global effects such as salience, at the same time as the local network processes task-specific or local information. This simple combination network displays interactions between salience and regular processing which correspond to known effects in the developing brain, such as enhanced learning as a result of heightened affect. The cortex biases neuronal responses to affect both learning and memory, through the use of diffuse projections from the limbic system to the cortex. Standard ANNs do not model this non-local flow of information represented by the ascending systems, which are a significant feature of the structure of the brain, and although they do allow associational learning with...

  19. The design and analysis of effective and efficient neural networks and their applications

    Energy Technology Data Exchange (ETDEWEB)

    Makovoz, W.V.

    1989-01-01

A complicated design issue of efficient multilayer neural networks is addressed, and the perceptron and similar neural networks are examined. It is shown that a three-layer perceptron neural network with specially designed learning algorithms provides an efficient framework to solve the exclusive-OR problem using only n - 1 processing elements in the second layer. Two efficient, rapidly converging algorithms for any symmetric Boolean function were developed using only n - 1 processing elements in the perceptron neural network and int(n/2) processing elements in the Adaline and perceptron neural network with the step-function transfer function. Similar results were obtained for the quasi-symmetric Boolean functions using a linear number of processing elements in perceptron neural networks, Adalines, and perceptron neural networks with step-function transfer functions. Generalized Boolean functions are discussed and two rapidly converging algorithms are shown for perceptron neural networks, Adalines, and perceptron neural networks with the step-function transfer function. Many other interesting perceptron neural networks are discussed in the dissertation. Perceptron neural networks are applied to find the largest value of the n inputs. A new perceptron neural network is designed to find the largest value of the n inputs with the minimum number of inputs and the minimum number of layers. New perceptron neural networks are developed to sort n inputs. New, effective and efficient back-propagation neural networks are designed to sort n inputs. The sigmoid transfer function is discussed and a generalized sigmoid function to improve neural network performance was developed. A modified back-propagation learning algorithm was developed that builds any n-input symmetric Boolean function using only int(n/2) processing elements in the second layer.
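
    For a concrete flavor of the n - 1 processing-element results, the snippet below shows the standard textbook construction of XOR (the n = 2 case) with a single hidden threshold unit plus direct input-to-output connections; it is not the dissertation's general algorithm.

```python
# XOR computed with one hidden threshold unit and direct input-to-output
# connections: the classic construction illustrating n - 1 hidden elements
# for n = 2 inputs.
def step(z):
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    h = step(x1 + x2 - 1.5)             # hidden unit fires only when both inputs are 1
    return step(x1 + x2 - 2 * h - 0.5)  # output: the sum minus twice the AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # prints the XOR truth table
```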

  20. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  1. Generalized Potential of Adult Neural Stem Cells

    Science.gov (United States)

    Clarke, Diana L.; Johansson, Clas B.; Wilbertz, Johannes; Veress, Biborka; Nilsson, Erik; Karlström, Helena; Lendahl, Urban; Frisén, Jonas

    2000-06-01

    The differentiation potential of stem cells in tissues of the adult has been thought to be limited to cell lineages present in the organ from which they were derived, but there is evidence that some stem cells may have a broader differentiation repertoire. We show here that neural stem cells from the adult mouse brain can contribute to the formation of chimeric chick and mouse embryos and give rise to cells of all germ layers. This demonstrates that an adult neural stem cell has a very broad developmental capacity and may potentially be used to generate a variety of cell types for transplantation in different diseases.

  2. Wbp2nl has a developmental role in establishing neural and non-neural ectodermal fates.

    Science.gov (United States)

    Marchak, Alexander; Grant, Paaqua A; Neilson, Karen M; Datta Majumdar, Himani; Yaklichkin, Sergey; Johnson, Diana; Moody, Sally A

    2017-09-01

    In many animals, maternally synthesized mRNAs are critical for primary germ layer formation. In Xenopus, several maternal mRNAs are enriched in the animal blastomere progenitors of the embryonic ectoderm. We previously identified one of these, WW-domain binding protein 2 N-terminal like (wbp2nl), that others previously characterized as a sperm protein (PAWP) that promotes meiotic resumption. Herein we demonstrate that it has an additional developmental role in regionalizing the embryonic ectoderm. Knock-down of Wbp2nl in the dorsal ectoderm reduced cranial placode and neural crest gene expression domains and expanded neural plate domains; knock-down in ventral ectoderm reduced epidermal gene expression. Conversely, increasing levels of Wbp2nl in the neural plate induced ectopic epidermal and neural crest gene expression and repressed many neural plate and cranial placode genes. The effects in the neural plate appear to be mediated, at least in part, by down-regulating chd, a BMP antagonist. Because the cellular function of Wbp2nl is not known, we mutated several predicted motifs. Expressing mutated proteins in embryos showed that a putative phosphorylation site at Thr45 and an α-helix in the PH-G domain are required to ectopically induce epidermal and neural crest genes in the neural plate. An intact YAP-binding motif also is required for ectopic epidermal gene expression as well as for down-regulating chd. This work reveals novel developmental roles for a cytoplasmic protein that promotes epidermal and neural crest formation at the expense of neural ectoderm. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Neural induction and factors that stabilize a neural fate

    OpenAIRE

    Rogers, Crystal; Moody, Sally A.; Casey, Elena

    2009-01-01

    The neural ectoderm of vertebrates forms when the BMP signaling pathway is suppressed. Herein we review the molecules that directly antagonize extracellular BMP and the signaling pathways that further contribute to reduce BMP activity in the neural ectoderm. Downstream of neural induction, a large number of “neural fate stabilizing” (NFS) transcription factors are expressed in the presumptive neural ectoderm, developing neural tube, and ultimately in neural stem cells. Herein we review what i...

  4. Consciousness and neural plasticity

    DEFF Research Database (Denmark)

    In contemporary consciousness studies the phenomenon of neural plasticity has received little attention despite the fact that neural plasticity is of still increased interest in neuroscience. We will, however, argue that neural plasticity could be of great importance to consciousness studies....... If consciousness is related to neural processes it seems, at least prima facie, that the ability of the neural structures to change should be reflected in a theory of this relationship "Neural plasticity" refers to the fact that the brain can change due to its own activity. The brain is not static but rather...... the relation between consciousness and brain functions. If consciousness is connected to specific brain structures (as a function or in identity) what happens to consciousness when those specific underlying structures change? It is therefore possible that the understanding and theories of neural plasticity can...

  5. Natural Language Processing Neural Network Considering Deep Cases

    Science.gov (United States)

    Sagara, Tsukasa; Hagiwara, Masafumi

In this paper, we propose a novel neural network considering deep cases. It can learn knowledge from natural language documents and can perform recall and inference. Various techniques of natural language processing using neural networks have been proposed. However, the natural language sentences used in these techniques consist of only a few words, and such techniques cannot handle complicated sentences. In order to solve these problems, the proposed network divides natural language sentences into a sentence layer, a knowledge layer, ten kinds of deep case layers and a dictionary layer. It can learn the relations among sentences and among words by dividing sentences. The advantages of the method are as follows: (1) ability to handle complicated sentences; (2) ability to restructure sentences; (3) usage of the conceptual dictionary, Goi-Taikei, as the long-term memory in a brain. Two kinds of experiments were carried out by using the goo dictionary and Wikipedia as knowledge sources. Superior performance of the proposed neural network has been confirmed.

  6. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    Science.gov (United States)

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

This paper studies brain tumor grading using multiphase MRI images and compares the results with various configurations of deep learning structures and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by the sensitivity and specificity. The results show a maximum improvement of 18% in grading performance of Convolutional Neural Networks based on sensitivity and specificity compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from Convolutional Neural Networks.

  7. A dendritic lattice neural network for color image segmentation

    Science.gov (United States)

    Urcid, Gonzalo; Lara-Rodríguez, Luis David; López-Meléndez, Elizabeth

    2015-09-01

A two-layer dendritic lattice neural network is proposed to segment color images in the Red-Green-Blue (RGB) color space. The two layer neural network is a fully interconnected feed forward net consisting of an input layer that receives color pixel values, an intermediate layer that computes pixel interdistances, and an output layer used to classify colors by hetero-association. The two-layer net is first initialized with a finite small subset of the colors present in the input image. These colors are obtained by means of an automatic clustering procedure such as k-means or fuzzy c-means. In the second stage, the color image is scanned on a pixel by pixel basis where each picture element is treated as a vector and fed into the network. For illustration purposes we use public domain color images to show the performance of our proposed image segmentation technique.
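
    A rough sketch of the overall pipeline is given below: a clustering step supplies the small palette of representative colors and every pixel is then labeled by its nearest palette color. The nearest-centroid rule stands in for the dendritic lattice network's hetero-association, and the synthetic image exists only to keep the example self-contained.

```python
# Two-stage color segmentation sketch: k-means palette, then per-pixel labeling.
import numpy as np
from sklearn.cluster import KMeans

def segment_colors(image_rgb, n_colors=5, seed=0):
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=seed).fit(pixels)
    labels = km.predict(pixels)                      # nearest-centroid assignment
    return labels.reshape(image_rgb.shape[:2]), km.cluster_centers_

# Synthetic 32x32 RGB image used only to exercise the function.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32, 3))
labels, palette = segment_colors(image, n_colors=4)
print(labels.shape, palette.shape)                   # (32, 32) (4, 3)
```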

  8. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultra-violet to the thermal infrared regions. These methods analyze reflectance on a pixel by pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNN) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using the bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional reflectance spectrum based neural network to help us understand the problem from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  9. Chaotic diagonal recurrent neural network

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks.

  10. Nonmixing layers

    Science.gov (United States)

    Gaillard, Pierre; Giovangigli, Vincent; Matuszewski, Lionel

    2016-12-01

    We investigate the impact of nonideal diffusion on the structure of supercritical cryogenic binary mixing layers. This situation is typical of liquid fuel injection in high-pressure rocket engines. Nonideal diffusion has a dramatic impact in the neighborhood of chemical thermodynamic stability limits where the components become quasi-immiscible and ultimately form a nonmixing layer. Numerical simulations are performed for mixing layers of H2 and N2 at a pressure of 100 atm and temperature around 120-150 K near chemical thermodynamic stability limits.

  11. A MULTILAYER COMPLEX NEURAL NETWORK TRAINING ALGORITHM AND ITS APPLICATION IN ADAPTIVE EQUALIZATION

    Institute of Scientific and Technical Information of China (English)

    Li Chunguang; Liao Xiaofeng; Wu Zhongfu; Yu Juebang

    2001-01-01

    In this paper, the layer-by-layer optimizing algorithm for training multilayer neural network is extended for the case of a multilayer neural network whose inputs, weights, and activation functions are all complex. The updating of the weights of each layer in the network is based on the recursive least squares method. The performance of the proposed algorithm is demonstrated with application in adaptive complex communication channel equalization.

  12. Evolvable Neural Software System

    Science.gov (United States)

    Curtis, Steven A.

    2009-01-01

The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  13. Kannada character recognition system using neural network

    Science.gov (United States)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, which include reading aids for the blind, bank cheques, and conversion of any handwritten document into structural text form. There is an insufficient number of works on Indian language character recognition, especially for the Kannada script, which is among the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.
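
    The sweep below mimics the reported experiment in spirit: a feed-forward network is trained with several hidden-layer sizes and the accuracies are compared. sklearn's bundled 8x8 digits images stand in for the 20x30 Kannada character vectors, so the numbers are only illustrative.

```python
# Compare recognition accuracy for different hidden-layer sizes of a
# feed-forward network on small character images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for hidden in (10, 25, 50, 100):                 # sweep hidden-layer sizes
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print(f"hidden={hidden:3d}  accuracy={clf.score(X_test, y_test):.3f}")
```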

  14. Neural net approach to predictive vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Nasrabadi, Nasser M.

    1992-11-01

    A new predictive vector quantization (PVQ) technique, capable of exploring the nonlinear dependencies in addition to the linear dependencies that exist between adjacent blocks of pixels, is introduced. Two different classes of neural nets form the components of the PVQ scheme. A multi-layer perceptron is embedded in the predictive component of the compression system. This neural network, using the non-linearity condition associated with its processing units, can perform as a non-linear vector predictor. The second component of the PVQ scheme vector quantizes (VQ) the residual vector that is formed by subtracting the output of the perceptron from the original wave-pattern. Kohonen Self-Organizing Feature Map (KSOFM) was utilized as a neural network clustering algorithm to design the codebook for the VQ technique. Coding results are presented for monochrome 'still' images.

  15. Application of artificial neural network and adaptive neuro-fuzzy inference system to investigate corrosion rate of zirconium-based nano-ceramic layer on galvanized steel in 3.5% NaCl solution

    Energy Technology Data Exchange (ETDEWEB)

    Mousavifard, S.M. [Department of Polymer Engineering and Color Technology, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Attar, M.M., E-mail: attar@aut.ac.ir [Department of Polymer Engineering and Color Technology, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Ghanbari, A. [Department of Polymer Engineering and Color Technology, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of); Dadgar, M. [Textile Engineering Department, Neyshabur University, Neyshabur (Iran, Islamic Republic of)

    2015-08-05

    Highlights: • Film formation of Zr-based conversion coating under different conditions was investigated. • We study the effect of some parameters on anticorrosion performance of conversion coating. • Optimization of processing conditions for surface treatment of galvanized steel was obtained. • Modeling and predicting corrosion current density of treated surfaces was performed using ANN and ANFIS. - Abstract: A nano-ceramic Zr-based conversion solution was prepared and optimization of Zr concentration, pH, temperature and immersion time for the treatment of hot-dip galvanized steel (HDG) was performed. SEM microscopy was utilized to investigate the microstructure and film formation of the layer and the anticorrosion performance of conversion coating was studied using polarization test. Artificial intelligence systems (ANN and ANFIS) were applied on the data obtained from polarization test and the models for predicting corrosion current density values were attained. The outcome of these models showed proper predictability of the methods. The influence of input parameters was discussed and the optimized conditions for Zr-based conversion layer formation on the galvanized steel were obtained as follows: pH 3.8–4.5, Zr concentration of about 100 ppm, ambient temperature and immersion time of about 90 s.

  16. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    Science.gov (United States)

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a Structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Neural network approach for differential diagnosis of interstitial lung diseases

    Science.gov (United States)

    Asada, Naoki; Doi, Kunio; MacMahon, Heber; Montner, Steven M.; Giger, Maryellen L.; Abe, Chihiro; Wu, Chris Y.

    1990-07-01

A neural network approach was applied for the differential diagnosis of interstitial lung diseases. The neural network was designed for distinguishing between 9 types of interstitial lung diseases based on 20 items of clinical and radiographic information. A database for training and testing the neural network was created with 10 hypothetical cases for each of the 9 diseases. The performance of the neural network was evaluated by ROC analysis. The optimal parameters for the current neural network were determined by selecting those yielding the highest ROC curves. In this case the neural network consisted of one hidden layer including 6 units and was trained with 200 learning iterations. When the decision performances of the neural network, chest radiologists, and senior radiology residents were compared, the neural network indicated high performance comparable to that of chest radiologists and superior to that of senior radiology residents. Our preliminary results suggested strongly that the neural network approach had potential utility in the computer-aided differential diagnosis of interstitial lung diseases.

  18. Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks.

    Science.gov (United States)

    Aguiar, Manuela A D; Dias, Ana Paula S; Ferreira, Flora

    2017-01-01

    We consider feed-forward and auto-regulation feed-forward neural (weighted) coupled cell networks. In feed-forward neural networks, cells are arranged in layers such that the cells of the first layer have empty input set and cells of each other layer receive only inputs from cells of the previous layer. An auto-regulation feed-forward neural coupled cell network is a feed-forward neural network where additionally some cells of the first layer have auto-regulation, that is, they have a self-loop. Given a network structure, a robust pattern of synchrony is a space defined in terms of equalities of cell coordinates that is flow-invariant for any coupled cell system (with additive input structure) associated with the network. In this paper, we describe the robust patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks. Regarding feed-forward neural networks, we show that only cells in the same layer can synchronize. On the other hand, in the presence of auto-regulation, we prove that cells in different layers can synchronize in a robust way and we give a characterization of the possible patterns of synchrony that can occur for auto-regulation feed-forward neural networks.

  19. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...

  20. Neural Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — As part of the Electrical and Computer Engineering Department and The Institute for System Research, the Neural Systems Laboratory studies the functionality of the...

  1. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the step of face detection, we propose a hybrid model combining AdaBoost and Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN will be aligned by Active Shape Model and Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: geometric feature based method and Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of the human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the CalTech database show the feasibility of our proposed model.

  2. Autonomous robot behavior based on neural networks

    Science.gov (United States)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

The purpose of an autonomous robot is to solve various tasks while adapting its behavior to the variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn experienced knowledge, that is, action procedures together with the corresponding knowledge of the workspace structure, and to recognize the working environment. The planning of the intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. The adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule and an initialization phase. The developed neural network combines advantages of networks based on the Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize lightly translated or rotated obstacles in any direction.

  3. Neural crest cell evolution: how and when did a neural crest cell become a neural crest cell.

    Science.gov (United States)

    Muñoz, William A; Trainor, Paul A

    2015-01-01

    As vertebrates evolved from protochordates, they shifted to a more predatory lifestyle, and radiated and adapted to most niches of the planet. This process was largely facilitated by the generation of novel vertebrate head structures, which were derived from neural crest cells (NCC). The neural crest is a unique vertebrate cell population that is frequently termed the "fourth germ layer" because it forms in conjunction with the other germ layers and contributes to a diverse array of cell types and tissues including the craniofacial skeleton, the peripheral nervous system, and pigment cells among many other tissues and cell types. NCC are defined by their origin at the neural plate border, via an epithelial-to-mesenchymal transition (EMT), together with multipotency and polarized patterns of migration. These defining characteristics, which evolved independently in the germ layers of invertebrates, were subsequently co-opted through their gene regulatory networks to form NCC in vertebrates. Moreover, recent data suggest that the ability to undergo an EMT was one of the latter features co-opted by NCC. In this review, we discuss the potential origins of NCC and how they evolved to contribute to nearly all tissues and organs throughout the body, based on paleontological evidence together with an evaluation of the evolution of molecules involved in NCC development and their migratory cell paths.

  4. Stacked Heterogeneous Neural Networks for Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Florin Leon

    2010-01-01

Full Text Available A hybrid model for time series forecasting is proposed. It is a stacked neural network, containing one normal multilayer perceptron with bipolar sigmoid activation functions, and another with an exponential activation function in the output layer. As shown by the case studies, the proposed stacked hybrid neural model performs well on a variety of benchmark time series. The combination of weights of the two stack components that leads to optimal performance is also studied.

  5. A novel fuzzy neural network and its approximation capability

    Institute of Scientific and Technical Information of China (English)

    刘普寅

    2001-01-01

    The polygonal fuzzy numbers are employed to define a new fuzzy arithmetic. A novel extension principle is also introduced for the increasing function σ: R→R. Thus it is convenient to construct a fuzzy neural network model with succinct learning algorithms. Such a system possesses some universal approximation capabilities, that is, the corresponding three layer feedforward fuzzy neural networks can be universal approximators to the continuously increasing fuzzy functions.

  6. Universal approximation in p-mean by neural networks

    NARCIS (Netherlands)

    Burton, R.M; Dehling, H.G

A feedforward neural net with d input neurons and with a single hidden layer of n neurons is given by f(x_1, ..., x_d) = sum_{j=1}^{n} a(j) psi( sum_{i=1}^{d} w(ji) x_i + theta(j) ), where a(j), theta(j), w(ji) is an element of R and psi is the activation function. In this paper we study the approximation of arbitrary functions f: R^d --> R by a neural net in an L^p(mu) norm for some finite measure mu

  7. Application of neural networks to unsteady aerodynamic control

    Science.gov (United States)

    Faller, William E.; Schreck, Scott J.; Luttges, Marvin W.

    1994-01-01

    The problem under consideration in this viewgraph presentation is to understand, predict, and control the fluid mechanics of dynamic maneuvers, unsteady boundary layers, and vortex dominated flows. One solution is the application of neural networks demonstrating closed-loop control. Neural networks offer unique opportunities: simplify modeling of three dimensional, vortex dominated, unsteady separated flow fields; are effective means for controlling unsteady aerodynamics; and address integration of sensors, controllers, and time lags into adaptive control systems.

  8. Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks

    CERN Document Server

    Kaaniche, Heni

    2010-01-01

    Mobility prediction allows estimating the stability of paths in a mobile wireless Ad Hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in Ad Hoc networks. This method consists of a multi-layer and recurrent neural network using back propagation through time algorithm for training.

  9. Research on Artificial Neural Network Method for Credit Application

    Institute of Scientific and Technical Information of China (English)

    MingxingLi; PingHeng; PeiwuDong

    2004-01-01

Considering our country's present situation, in this paper we provide ten evaluation indexes for credit application management, which are used as the input vector of the neural network. Then we set up a three-layer back-propagation model for credit application evaluation based on the artificial neural network. We also analyzed the model using real data; the testing result indicates that the model is a good method and a good tool.

  10. Multi-column Deep Neural Networks for Image Classification

    OpenAIRE

    Cireşan, Dan; Meier, Ueli; Schmidhuber, Juergen

    2012-01-01

    Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. ...

  11. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses connection weights formed in the preliminary learning process of a neural network classifier. In the experiments using the MNIST database of handwritten digits, the feature selection procedure allows reduction of the feature number (from 60 000 to 7000), preserving comparable recognition capability while accelerating computations. Experimental comparison between the LiRA perceptron and the modular assembly neural network is accomplished, which shows that the recognition capability of the modular assembly neural network is somewhat better.
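
    A simplified version of the weight-analysis idea is sketched below: a linear classifier stands in for the LiRA perceptron, features are ranked by the magnitude of their learned connection weights after a preliminary training pass, and the classifier is retrained on the retained half. The dataset and the 50% cut-off are assumptions.

```python
# Feature selection by analysing connection weights from a preliminary
# training pass, then retraining on the selected features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pre = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)     # preliminary learning
importance = np.abs(pre.coef_).sum(axis=0)                  # weight magnitude per feature
keep = importance >= np.percentile(importance, 50)          # keep the top half

post = LogisticRegression(max_iter=2000).fit(X_tr[:, keep], y_tr)
print("features kept:", keep.sum(), "of", X.shape[1])
print("accuracy with selected features:", round(post.score(X_te[:, keep], y_te), 3))
```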

  12. Neural network for constrained nonsmooth optimization using Tikhonov regularization.

    Science.gov (United States)

    Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun

    2015-03-01

    This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. Firstly, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problems. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality feasible region in finite time, and is globally convergent to the unique optimal solution of the related strongly convex optimization problems. Compared with the existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. In the end, some numerical examples and application are given to illustrate the effectiveness and improvement of the proposed neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Chinese word sense disambiguation based on neural networks

    Institute of Scientific and Technical Information of China (English)

    LIU Ting; LU Zhi-mao; LANG Jun; LI Sheng

    2005-01-01

The input of a network is the key problem for Chinese word sense disambiguation utilizing a neural network. This paper presents an input model of the neural network that calculates the mutual information between contextual words and the ambiguous word by using statistical methodology and taking a certain number of contextual words on either side of the ambiguous word according to (-M, +N). The experiment adopts a three-layer BP neural network model and shows how the size of the training set and the values of M and N affect the performance of the neural network model. The experimental objects are six pseudowords, each having three word senses, constructed according to certain principles. The tested accuracy of our approach reaches 90.31% on a closed corpus and 89.62% on an open corpus. The experiment proves that the neural network model has a good performance on word sense disambiguation.
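
    The snippet below sketches the statistical step that builds the network input: pointwise mutual information computed from co-occurrence counts, here scored between hypothetical context words and word senses rather than replicating the paper's exact (-M, +N) windowing.

```python
# Pointwise mutual information from co-occurrence counts; the counts below
# are invented examples, not data from the cited experiment.
import math
from collections import Counter

# (context_word, sense) co-occurrence counts from a hypothetical training corpus
pair_counts = Counter({("bank_river", "s1"): 40, ("bank_money", "s2"): 55,
                       ("water", "s1"): 30, ("loan", "s2"): 25, ("water", "s2"): 5})
total = sum(pair_counts.values())
word_counts, sense_counts = Counter(), Counter()
for (w, s), c in pair_counts.items():
    word_counts[w] += c
    sense_counts[s] += c

def pointwise_mi(word, sense):
    """log p(w,s) / (p(w) p(s)); returns -inf if the pair never co-occurs."""
    joint = pair_counts.get((word, sense), 0)
    if joint == 0:
        return float("-inf")
    return math.log(joint * total / (word_counts[word] * sense_counts[sense]))

for w in word_counts:
    scores = {s: round(pointwise_mi(w, s), 2)
              for s in sense_counts if pair_counts.get((w, s), 0)}
    print(w, scores)
```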

  14. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-02-01

Making deductions and expectations about climate has been a challenge all through mankind's history. Accurate meteorological forecasts help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research treats climate as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting due to the specialty of climate anticipating frameworks. The study concentrates on data representing Saudi Arabia weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative moistness, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons to represent rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than those of RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.

  15. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.
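
    As a rough stand-in for the setup described here, the sketch below fits a network with a single hidden layer of eight nodes to a synthetic (latitude, pressure, day of year, CH4) → N2O relation; the HALOE data and the Quickprop learning rule are not available, so invented data and sklearn's default solver are used.

```python
# One-hidden-layer (8-node) regression of a tracer-tracer relation on
# synthetic inputs standing in for latitude, pressure, day of year and CH4.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
lat = rng.uniform(-90, 90, n)
pressure = rng.uniform(1, 300, n)                 # hPa
doy = rng.uniform(0, 365, n)
ch4 = rng.uniform(0.2, 1.8, n)                    # ppmv, synthetic range
n2o = 320 * np.tanh(ch4) - 0.05 * pressure + rng.normal(0, 2, n)   # made-up relation

X = StandardScaler().fit_transform(np.column_stack([lat, pressure, doy, ch4]))
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, n2o)
pred = model.predict(X)
print("correlation(simulated, target):", round(np.corrcoef(pred, n2o)[0, 1], 4))
```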

  16. Form Analysis by Neural Classification of Cells

    OpenAIRE

    Belaïd, Yolande; Belaïd, Abdel

    1999-01-01

The original publication is available at www.springerlink.com; Our aim in this paper is to present a methodology for linearly combining multiple neural classifiers for cell analysis of forms. Features used for the classification relate to the text orientation and to its character morphology. Eight classes are extracted among numeric, alphabetic, vertical, horizontal, capitals, etc. Classifiers are multi-layered perceptrons considering firstly global features and refinin...

  17. Neural Networks: Implementations and Applications

    NARCIS (Netherlands)

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  19. What Are Neural Tube Defects?

    Science.gov (United States)

... Neural Tube Defects (NTDs): Condition Information. What are neural tube defects? Neural (pronounced NOOR-uhl) tube defects are ...

  20. Stimulated Deep Neural Network for Speech Recognition

    Science.gov (United States)

    2016-09-08

bottleneck feature [21] on 11 Babel languages was used. It was processed by both side-level CMN & CVN and the DNN used it in a temporal context window...seconds, respectively. Decoding was performed with the RT04 tri-gram language model [19]. The adaptation schemes were evaluated in a rapid utterance-level... temporal context window of 9 frames as the input feature. The neural network consisted of 5 hidden layers with 1024 nodes in each layer and the context

  1. A hybrid neural network model for consciousness

    Institute of Scientific and Technical Information of China (English)

    蔺杰; 金小刚; 杨建刚

    2004-01-01

A new framework for consciousness is introduced based upon traditional artificial neural network models. This framework reflects explicit connections between two parts of the brain: one global working memory and distributed modular cerebral networks relating to specific brain functions. Accordingly this framework is composed of three layers, physical mnemonic layer and abstract thinking layer, which cooperate together through a recognition layer to accomplish information storage and cognition using algorithms of how these interactions contribute to consciousness: (1) the reception process whereby cerebral subsystems group distributed signals into coherent object patterns; (2) the partial recognition process whereby patterns from particular subsystems are compared or stored as knowledge; and (3) the resonant learning process whereby global workspace stably adjusts its structure to adapt to patterns' changes. Using this framework, various sorts of human actions can be explained, leading to a general approach for analyzing brain functions.

  3. NEURAL NETWORK TRAINING WITH PARALLEL PARTICLE SWARM OPTIMIZER

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Feedforward neural networks, such as the multi-layer perceptron and radial basis function neural networks, have been widely applied to classification, function approximation and data mining. Evolutionary computation has been explored to train neural networks as a very promising and competitive alternative learning method, because it has the potential to produce a global minimum in the weight space. Recently, an emerging evolutionary computation technique, Particle Swarm Optimization (PSO), has become a hot topic because of i...
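
    A minimal particle swarm optimizer training the weights of a tiny one-hidden-layer network is sketched below to illustrate the gradient-free alternative the record describes; the inertia and acceleration constants, swarm size and toy XOR task are all arbitrary choices.

```python
# PSO optimizing the 13 weights of a 2-3-1 network on the XOR task.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def forward(w, X):
    W1 = w[:6].reshape(2, 3)                     # 2 inputs -> 3 hidden units
    b1, W2, b2 = w[6:9], w[9:12], w[12]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

n_particles, dim = 30, 13
pos = rng.normal(scale=1.0, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(300):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("final MSE:", round(loss(gbest), 4), "outputs:", np.round(forward(gbest, X), 2))
```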

  4. Impact of Mutation Weights on Training Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Lamia Abed Noor Muhammed

    2014-07-01

Full Text Available A neural network is a computational approach based on the simulation of biological neural networks. This approach is governed by several parameters: learning rate, initialized weights, network architecture, and so on. However, this paper focuses on one of these parameters, namely the weights. The aim is to shed light on the mutation of weights during network training and its effects on the results. The experiment was done using a backpropagation neural network with one hidden layer. The results reveal the role of mutation in escaping from local minima and making the change
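
    The sketch below conveys the mutation idea in a hedged form: short backpropagation rounds are interleaved with Gaussian perturbations of the weight matrices whenever training accuracy stalls. sklearn's MLPClassifier with warm_start stands in for the paper's backpropagation network, and the mutation scale and schedule are assumptions.

```python
# Interleave short backprop rounds with weight mutation to escape local minima.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=50, warm_start=True,
                    random_state=0)
best = -np.inf
for round_ in range(10):
    net.fit(X, y)                                # one short backprop round
    score = net.score(X, y)
    if score <= best:                            # training has stalled:
        for W in net.coefs_:                     # mutate every weight matrix
            W += rng.normal(scale=0.05, size=W.shape)
    best = max(best, score)
    print(f"round {round_}: accuracy = {score:.3f}")
```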

  5. APPROACH TO FAULT ON-LINE DETECTION AND DIAGNOSIS BASED ON NEURAL NETWORKS FOR ROBOT IN FMS

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

Based on radial basis function (RBF) neural networks, the healthy working model of each subsystem of the robot in FMS is established. A new approach to on-line fault detection and diagnosis based on the neural network model is presented. Double fault detection based on the neural network model and threshold judgement, together with quick fault identification based on multi-layer feedforward neural networks, is applied, which can meet the quickness and reliability requirements of fault detection and diagnosis for robots in FMS.

  6. Kunstige neurale net

    DEFF Research Database (Denmark)

    Hørning, Annette

    1994-01-01

The article deals with the possibility of using artificial neural networks in connection with computational processing of natural language, especially automatic speech recognition.

  7. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  8. Consciousness and neural plasticity

    DEFF Research Database (Denmark)

    changes or to abandon the strong identity thesis altogether. Were one to pursue a theory according to which consciousness is not an epiphenomenon to brain processes, consciousness may in fact affect its own neural basis. The neural correlate of consciousness is often seen as a stable structure, that is...

  10. Synchronization in networks with multiple interaction layers

    CERN Document Server

    del Genio, Charo I; Bonamassa, Ivan; Boccaletti, Stefano

    2016-01-01

The structure of many real-world systems is best captured by networks consisting of several interaction layers. Understanding how a multi-layered structure of connections affects the synchronization properties of dynamical systems evolving on top of it is a highly relevant endeavour in mathematics and physics, and has potential applications to several societally relevant topics, such as power grids engineering and neural dynamics. We propose a general framework to assess stability of the synchronized state in networks with multiple interaction layers, deriving a necessary condition that generalizes the Master Stability Function approach. We validate our method applying it to a network of Rössler oscillators with a double layer of interactions, and show that highly rich phenomenology emerges. This includes cases where the stability of synchronization can be induced even if both layers would have individually induced unstable synchrony, an effect genuinely due to the true multi-layer structure of the interact...

  11. Is neural Darwinism Darwinism?

    Science.gov (United States)

    van Belle, T

    1997-01-01

Neural Darwinism is a theory of cognition developed by Gerald Edelman along with George Reeke and Olaf Sporns at Rockefeller University. As its name suggests, neural Darwinism is modeled after biological Darwinism, and its authors assert that the two processes are strongly analogous: both operate on variation in a population, amplifying the more adaptive individuals. However, from a computational perspective, neural Darwinism is quite different from other models of natural selection, such as genetic algorithms. The individuals of neural Darwinism do not replicate, thus robbing the process of the capacity to explore new solutions over time and ultimately reducing it to a random search. Because neural Darwinism does not have the computational power of a truly Darwinian process, it is misleading to label it as such. To illustrate this disparity in adaptive power, one of Edelman's early computer experiments, Darwin I, is revisited, and it is shown that adding replication greatly improves the adaptive power of the system.

  12. Neural tube closure in Xenopus laevis involves medial migration, directed protrusive activity, cell intercalation and convergent extension.

    Science.gov (United States)

    Davidson, L A; Keller, R E

    1999-10-01

    We have characterized the cell movements and prospective cell identities as neural folds fuse during neural tube formation in Xenopus laevis. A newly developed whole-mount, two-color fluorescent RNA in situ hybridization method, visualized with confocal microscopy, shows that the dorsal neural tube gene xpax3 and the neural-crest-specific gene xslug are expressed far lateral to the medial site of neural fold fusion and that expression moves medially after fusion. To determine whether cell movements or dynamic changes in gene expression are responsible, we used low-light videomicroscopy followed by fluorescent in situ and confocal microscopy. These methods revealed that populations of prospective neural crest and dorsal neural tube cells near the lateral margin of the neural plate at the start of neurulation move to the dorsal midline using distinctive forms of motility. Before fold fusion, superficial neural cells apically contract, roll the neural plate into a trough and appear to pull the superficial epidermal cell sheet medially. After neural fold fusion, lateral deep neural cells move medially by radially intercalating between other neural cells using two types of motility. The neural crest cells migrate as individual cells toward the dorsal midline using medially directed monopolar protrusions. These movements combine the two lateral populations of neural crest into a single medial population that form the roof of the neural tube. The remaining cells of the dorsal neural tube extend protrusions both medially and laterally bringing about radial intercalation of deep and superficial cells to form a single-cell-layered, pseudostratified neural tube. While ours is the first description of medially directed cell migration during neural fold fusion and re-establishment of the neural tube, these complex cell behaviors may be involved during cavitation of the zebrafish neural keel and secondary neurulation in the posterior axis of chicken and mouse.

  13. Parameterizing Stellar Spectra Using Deep Neural Networks

    Science.gov (United States)

    Li, Xiang-Ru; Pan, Ru-Yang; Duan, Fu-Qing

    2017-03-01

    Large-scale sky surveys are observing massive amounts of stellar spectra. The large number of stellar spectra makes it necessary to automatically parameterize spectral data, which in turn helps in statistically exploring properties related to the atmospheric parameters. This work focuses on designing an automatic scheme to estimate effective temperature (Teff), surface gravity (log g) and metallicity [Fe/H] from stellar spectra. A scheme based on three deep neural networks (DNNs) is proposed. This scheme consists of the following three procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, the three atmospheric parameters Teff, log g and [Fe/H] are estimated using the computed DNNs. The constructed DNN is a neural network with six layers (one input layer, one output layer and four hidden layers), for which the numbers of nodes in the six layers are 3821, 1000, 500, 100, 30 and 1, respectively. This proposed scheme was tested on both real spectra and theoretical spectra from Kurucz’s new opacity distribution function models. Test errors are measured with mean absolute errors (MAEs). The errors on real spectra from the Sloan Digital Sky Survey (SDSS) are 0.1477, 0.0048 and 0.1129 dex for log g, log Teff and [Fe/H] (64.85 K for Teff), respectively. Regarding theoretical spectra from Kurucz’s new opacity distribution function models, the MAEs of the test errors are 0.0182, 0.0011 and 0.0112 dex for log g, log Teff and [Fe/H] (14.90 K for Teff), respectively.
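
    As a rough illustration of the layer sizes quoted above (3821-1000-500-100-30-1 nodes), the following PyTorch sketch builds such a fully connected network and takes one gradient step on placeholder data. It is not the authors' code, and the autoencoder pre-training stage is omitted.

```python
# Illustrative sketch only: a fully connected network with the layer sizes
# quoted in the abstract, trained here by plain gradient descent on dummy data.
import torch
import torch.nn as nn

sizes = [3821, 1000, 500, 100, 30, 1]
layers = []
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    layers.append(nn.Linear(n_in, n_out))
    if n_out != 1:                          # hidden layers get a nonlinearity
        layers.append(nn.Sigmoid())
model = nn.Sequential(*layers)

x = torch.randn(8, 3821)                    # flux values of 8 spectra (placeholder)
y = torch.randn(8, 1)                       # e.g. log Teff labels (placeholder)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss = nn.functional.l1_loss(model(x), y)   # MAE, matching the reported metric
loss.backward()
optimizer.step()
print(float(loss))
```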

  14. Neural Networks for Emotion Classification

    CERN Document Server

    Sun, Yafei

    2011-01-01

    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform back-propagation. Instead of using the partial derivatives of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...

  15. Some neural effects of adenosine.

    Science.gov (United States)

    Haulică, I; Brănişteanu, D D; Petrescu, G H

    1978-01-01

    The possible neural effects of adenosine were investigated by using electrophysiological techniques at the level of some central and peripheral synapses. The evoked potentials in the somatosensorial cerebral cortex are influenced according to both the type of administration and the level of the electrical stimulation. While local application does not induce significant alterations, intrathalamic injections and perfusion of the IIIrd cerebral ventricle do change the distribution of activated units at the level of different cortical layers, especially during peripheral stimulation. The frequency of spontaneous miniature discharges intracellularly recorded at the neuromuscular junction (mepp) is significantly depressed by adenosine. This effect is calcium- and dose-dependent. The end plate potentials (EPP) were also depressed. The statistical binomial analysis of the phenomenon indicated that adenosine induces a decrease of the presynaptic pool of available transmitter. The data obtained demonstrate a presynaptic inhibitory action of adenosine besides its known vascular and metabolic effects.

  16. Dynamics of neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
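
    The tree parity machines mentioned above can be sketched in a few lines. The following toy implementation illustrates the standard neural key-exchange protocol (it is not the paper's code): two machines with K=3 hidden units synchronize by exchanging only their output bits. K, N and L are placeholder choices.

```python
# Toy tree parity machine synchronization: bounded integer weights in [-L, L],
# Hebbian update applied only when the two output bits agree.
import numpy as np

K, N, L = 3, 100, 4
rng = np.random.default_rng(0)

def output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def update(w, x, sigma, tau):
    for k in range(K):
        if sigma[k] == tau:                      # only units agreeing with tau learn
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))
for step in range(10000):
    x = rng.choice([-1, 1], size=(K, N))          # public random input
    sA, tauA = output(wA, x)
    sB, tauB = output(wB, x)
    if tauA == tauB:                              # update only on agreement
        update(wA, x, sA, tauA)
        update(wB, x, sB, tauB)
    if np.array_equal(wA, wB):
        print("synchronized after", step, "steps")
        break
```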

  17. Patterns of neural differentiation in melanomas

    Directory of Open Access Journals (Sweden)

    Singh Avantika V

    2010-11-01

    Full Text Available Abstract Background: Melanomas, highly malignant tumors, arise from melanocytes, which originate as multipotent neural crest cells during neural tube genesis. The purpose of this study is to assess the pattern of neural differentiation in relation to angiogenesis in VGP melanomas, using the tumor as a three-dimensional system. Methods: Tumor-vascular complexes [TVC] are formed at the tumor-stroma interface by tumor cells ensheathing angiogenic vessels and proliferating into a mantle of 5 to 6 layers [L1 to L5], forming a perivascular mantle zone [PMZ]. The pattern of neural differentiation, assessed by immunopositivity for HMB45, GFAP, NFP and synaptophysin, has been compared in [a] the general tumor, [b] tumor-vascular complexes and [c] the perimantle zone [PC] on serial frozen and paraffin sections. Statistical Analysis: ANOVA (Kruskal-Wallis One Way Analysis of Variance); All Pairwise Multiple Comparison Procedures (Tukey Test). Results: The cells abutting on the basement membrane acquire GFAP positivity and extend processes. New layers of tumor cells show a transition between L2 and L3, followed by NFP and Syn positivity in L4 and L5. The level of GFAP positivity in L1 and L2 is directly proportional to the percentage of NFP/Syn positivity in L4 and L5 when pigmented PMZ is compared with poorly pigmented PMZ. Tumor cells in the perimantle zone show high NFP [65%] and Syn [35.4%] positivity with very low GFAP [6.9%], correlating with the positivity in the outer layers. Discussion: From this study it is seen that melanoma cells revert to the embryonic pattern of differentiation, with radial glia-like cells [GFAP+ve] that further differentiate into neuron-positive cells [NFP&Syn+ve] during angiogenic tumor-vascular interaction, as seen during neurogenesis, to populate the tumor substance.

  18. F77NNS - A FORTRAN-77 NEURAL NETWORK SIMULATOR

    Science.gov (United States)

    Mitchell, P. H.

    1994-01-01

    F77NNS (A FORTRAN-77 Neural Network Simulator) simulates the popular back error propagation neural network. F77NNS is an ANSI-77 FORTRAN program designed to take advantage of vectorization when run on machines having this capability, but it will run on any computer with an ANSI-77 FORTRAN Compiler. Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to biological nerve cells. Problems which involve pattern matching or system modeling readily fit the class of problems which F77NNS is designed to solve. The program's formulation trains a neural network using Rumelhart's back-propagation algorithm. Typically the nodes of a network are grouped together into clumps called layers. A network will generally have an input layer through which the various environmental stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. The back-propagation training algorithm can require massive computational resources to implement a large network such as a network capable of learning text-to-phoneme pronunciation rules as in the famous Sejnowski experiment. The Sejnowski neural network learns to pronounce 1000 common English words. The standard input data defines the specific inputs that control the type of run to be made, and input files define the NN in terms of the layers and nodes, as well as the input/output (I/O) pairs. The program has a restart capability so that a neural network can be solved in stages suitable to the user's resources and desires. F77NNS allows the user to customize the patterns of connections between layers of a network. The size of the neural network to be solved is limited only by the amount of random access memory (RAM) available to the
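
    The back-propagation procedure that F77NNS implements can be condensed into a short Python analogue (this is not the FORTRAN source): one hidden layer of sigmoid units trained with Rumelhart's rule on a toy XOR problem.

```python
# Compact back-propagation analogue: one hidden layer, sigmoid units,
# gradient descent on the squared error for the XOR mapping.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    H = sigmoid(X @ W1 + b1)                      # forward pass
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)                    # output delta
    dH = (dY @ W2.T) * H * (1 - H)                # hidden delta (back-propagated)
    W2 -= 0.5 * H.T @ dY; b2 -= 0.5 * dY.sum(axis=0)
    W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(axis=0)

print(np.round(Y, 2))                             # should approach [[0],[1],[1],[0]]
```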

  19. ANT Advanced Neural Tool

    Energy Technology Data Exchange (ETDEWEB)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-07-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial Neural Nets are often used as an alternative to the traditional symbolic manipulation and first order logic used in Artificial Intelligence, due to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of a Neural Net, a Multilayer Perceptron developed by programming in C on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions about hardware implementations. (Author) 15 refs.

  20. Application of Artificial Neural Network in Active Vibration Control of Diesel Engine

    Institute of Scientific and Technical Information of China (English)

    SUN Cheng-shun; ZHANG Jian-wu

    2005-01-01

    An Artificial Neural Network (ANN) is applied to a diesel two-stage vibration isolating system and an AVC (Active Vibration Control) system is developed. Both the identifier and the controller are constructed from three-layer BP neural networks. Besides computer simulation, experimental research is carried out on both an analog bench and a diesel bench. The results of simulation and experiment show a diminished vibration response.

  1. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A;

    1999-01-01

    part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...

  2. Artificial Neural Networks for Nonlinear Dynamic Response Simulation in Mechanical Systems

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Høgsberg, Jan Becker; Winther, Ole

    2011-01-01

    It is shown how artificial neural networks can be trained to predict dynamic response of a simple nonlinear structure. Data generated using a nonlinear finite element model of a simplified wind turbine is used to train a one layer artificial neural network. When trained properly the network is able...

  3. Temporal and spacial changes of highly polysialylated neural cell adhesion molecule immunoreactivity in amygdala kindling development.

    Science.gov (United States)

    Sato, K; Iwai, M; Nagano, I; Shoji, M; Abe, K

    2003-01-01

    To investigate the migration of neural stem cells as well as neural plastic changes in the epileptic brain, the spatiotemporal expression of immunoreactive highly polysialylated neural cell adhesion molecule (PSA-NCAM) was examined during amygdala kindling development in rat. The neural migration and synaptic remodeling detected with PSA-NCAM staining occurred in the dentate gyrus of the hippocampus, the subventricular zone and the pyriform cortex with amygdaloid kindling in generalized seizure but not in partial seizure. Although PSA-NCAM positive dendrites in the dentate gyrus were minimally found in the control brain, they extended slightly in animals with partial seizure, and greatly toward the molecular layer with generalized seizure. Thus, the migration of neural stem cells as well as neural plastic changes were spatially and temporally different between brain regions depending on different kindling stages. These changes may mainly contribute to the reorganization of the neural network in the epileptic brain.

  4. What the success of brain imaging implies about the neural code.

    Science.gov (United States)

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  5. Torque vector control using neural network controller for synchronous reluctance motor

    Energy Technology Data Exchange (ETDEWEB)

    Feng, X. [Teco-Westinghouse Motor Co, R and D Center, Round Rock, TX (United States); Belmans, R.; Hameyer, K. [Katholieke Universiteit Leuven, Dic. ELEN, Dept. ESAT, Leuven-Heverlee (Belgium)

    2000-08-01

    This paper presents the torque vector control technique using a neural network controller for a synchronous reluctance motor. As the artificial neural network controller has the advantages of faster execution speed, harmonic ripple immunity and fault tolerance compared to a DSP-based controller, different multi-layer neural network controllers are designed and trained to produce a correct target vector when presented with the corresponding input vector. The training results and calculated flops show that although the designed three-layer controller with tansig, purelin and hard-limit functions has more processing layers, the number of neurons in each layer is smaller than in other kinds of neural network controllers, thus requiring fewer flops and yielding faster execution and response. (orig.)
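
    A minimal sketch of a forward pass through a three-layer network using the MATLAB-style activations named above (tansig, purelin, hardlim). The layer sizes and weights are placeholders, not the authors' controller.

```python
# Illustrative three-layer forward pass with tansig / purelin / hardlim units.
import numpy as np

tansig = np.tanh
purelin = lambda z: z
hardlim = lambda z: (z >= 0).astype(float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # e.g. 4 motor-state inputs -> 6 units
W2, b2 = rng.normal(size=(6, 3)), np.zeros(3)
W3, b3 = rng.normal(size=(3, 2)), np.zeros(2)   # 2 binary switching outputs

def controller(x):
    a1 = tansig(x @ W1 + b1)
    a2 = purelin(a1 @ W2 + b2)
    return hardlim(a2 @ W3 + b3)                # hard-limited switching signals

print(controller(rng.normal(size=4)))
```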

  6. Sensitivity of feedforward neural networks to weight errors--

    Energy Technology Data Exchange (ETDEWEB)

    Stevenson, M.; Widrow, B. (Stanford Univ., CA (USA). Dept. of Engineering-Economic Systems); Winter, R. (U.S. Air Force PSC APO, NY (US))

    1990-03-01

    An important consideration when implementing neural networks with digital or analog hardware of limited precision is the sensitivity of neural networks to weight errors. In this paper, the authors analyze the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. Surprisingly, the probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
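
    The sensitivity result can be probed numerically. The following Monte Carlo sketch is an illustration under assumed Gaussian weights and bipolar inputs (not the paper's derivation): it estimates how often an Adaline output flips when every weight is perturbed by a fixed percentage.

```python
# Monte Carlo estimate of the per-neuron error probability under weight errors.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons, n_trials = 100, 100, 2000
pct_change = 0.05                                  # 5% weight error

flips = 0.0
for _ in range(n_trials):
    W = rng.normal(size=(n_neurons, n_inputs))     # assumed Gaussian weights
    x = rng.choice([-1.0, 1.0], size=n_inputs)     # bipolar input pattern
    y = np.sign(W @ x)
    W_err = W * (1 + pct_change * rng.choice([-1.0, 1.0], size=W.shape))
    y_err = np.sign(W_err @ x)
    flips += np.mean(y != y_err)                   # fraction of outputs that flipped

print("estimated per-neuron error probability:", flips / n_trials)
```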

  7. Critical branching neural networks.

    Science.gov (United States)

    Kello, Christopher T

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.
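
    A toy version of the self-tuning idea (not Kello's model) can be written as a probabilistic spiking network whose transmission probability is nudged so that the empirical branching ratio, the number of spikes caused per spike, approaches the critical value of 1.

```python
# Toy critical-branching homeostasis on a sparse random network.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                       # neurons
W = (rng.random((n, n)) < 0.05).astype(float) # sparse random connectivity
p = 0.10                                      # transmission probability (self-tuned)

active = rng.random(n) < 0.05                 # initial spikes
for t in range(2000):
    parents = active.sum()
    if parents == 0:                          # reseed if activity dies out
        active = rng.random(n) < 0.05
        continue
    k = W.T @ active                          # active presynaptic inputs per neuron
    fire_prob = 1.0 - (1.0 - p) ** k          # at least one successful transmission
    active = rng.random(n) < fire_prob
    sigma = active.sum() / parents            # empirical branching ratio
    p = float(np.clip(p + 0.001 * (1.0 - sigma), 0.0, 1.0))  # nudge toward sigma = 1

print("final transmission probability:", round(p, 3))
```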

  8. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  9. Neural Oscillators Programming Simplified

    Directory of Open Access Journals (Sweden)

    Patrick McDowell

    2012-01-01

    Full Text Available The neurological mechanism used for generating rhythmic patterns for functions such as swallowing, walking, and chewing has been modeled computationally by the neural oscillator. It has been widely studied by biologists to model various aspects of organisms and by computer scientists and robotics engineers as a method for controlling and coordinating the gaits of walking robots. Although there has been significant study in this area, it is difficult to find basic guidelines for programming neural oscillators. In this paper, the authors approach neural oscillators from a programmer’s point of view, providing background and examples for developing neural oscillators to generate rhythmic patterns that can be used in biological modeling and robotics applications.
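
    In the spirit of the article's programmer's point of view, the following sketch integrates a classic two-neuron, Matsuoka-style oscillator with mutual inhibition and self-adaptation; the parameter values are illustrative choices, not taken from the paper.

```python
# Two-neuron mutually inhibiting oscillator (Matsuoka-style), Euler-integrated.
import numpy as np

tau, tau_a, beta, w, s = 0.1, 0.2, 2.5, 2.0, 1.0
u = np.array([0.1, 0.0])      # membrane states (small asymmetry to start)
v = np.zeros(2)               # adaptation states
dt, ys = 0.005, []

for step in range(4000):
    y = np.maximum(u, 0.0)                        # firing rates
    du = (-u - beta * v - w * y[::-1] + s) / tau  # mutual inhibition between the two units
    dv = (-v + y) / tau_a                         # slow self-adaptation
    u += dt * du
    v += dt * dv
    ys.append(y[0] - y[1])                        # rhythmic output signal

print("output range:", round(min(ys), 2), "to", round(max(ys), 2))
```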

  10. Neural networks and graph theory

    Institute of Scientific and Technical Information of China (English)

    许进; 保铮

    2002-01-01

    The relationships between artificial neural networks and graph theory are considered in detail. The applications of artificial neural networks to many difficult problems of graph theory, especially NP-complete problems, and the applications of graph theory to artificial neural networks are discussed. For example, graph theory is used to study the pattern classification problem on discrete-type feedforward neural networks and the stability analysis of feedback artificial neural networks, among other topics.

  11. Cascade recursion models of computing the temperatures of underground layers

    Institute of Scientific and Technical Information of China (English)

    HAN; Liqun; BI; Siwen; SONG; Shixin

    2006-01-01

    An RBF neural network was used to construct computational models of the underground temperatures of different layers, using ground-surface parameters and the temperatures of various underground layers. Because series recursion models also enable researchers to use above-ground surface parameters to compute the temperatures of different underground layers, this method provides a new way of using thermal infrared remote sensing to monitor the suture zones of large areas of blocks and to research thermal anomalies in geologic structures.

  12. Building a Neural Computer

    OpenAIRE

    Carreira, Paulo J.F.; Rosa, Miguel A.; Neto, João Pedro; Costa, José Félix

    1998-01-01

    In the work of [Siegelmann 95] it was shown that Artificial Recursive Neural Networks have the same computing power as Turing machines. A Turing machine can be programmed in a proper high-level language - the language of partial recursive functions. In this paper we present the implementation of a compiler that directly translates high-level Turing machine programs to Artificial Recursive Neural Networks. The application contains a simulator that can be used to test the resulting networks. W...

  13. Neural cryptography with feedback

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido

    2004-04-01

    Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.

  14. Imaging the Neural Symphony.

    Science.gov (United States)

    Svoboda, Karel

    2016-01-01

    Since the start of the new millennium, a method called two-photon microscopy has allowed scientists to peer farther into the brain than ever before. Our author, one of the pioneers in the development of this new technology, writes that "directly observing the dynamics of neural networks in an intact brain has become one of the holy grails of brain research." His article describes the advances that led to this remarkable breakthrough, one that is helping neuroscientists better understand neural networks.

  17. Porosity Log Prediction Using Artificial Neural Network

    Science.gov (United States)

    Dwi Saputro, Oki; Lazuardi Maulana, Zulfikar; Dzar Eljabbar Latief, Fourier

    2016-08-01

    Well logging is important in oil and gas exploration. Many physical parameters of a reservoir are derived from well logging measurements. Geophysicists often use well logging to obtain reservoir properties such as porosity, water saturation and permeability. Most of the time, the measurement of these reservoir properties is considered expensive. One method to substitute for the measurement is to conduct a prediction using an artificial neural network. In this paper, an artificial neural network is used to predict porosity log data from other log data. Three wells from the ‘yy’ field are used to conduct the prediction experiment. The log data are sonic, gamma ray, and porosity logs. One of the three wells is used as training data for the artificial neural network, which employs the Levenberg-Marquardt backpropagation algorithm. Through several trials, we find that the most optimal training input is sonic log data and gamma ray log data with a hidden layer of size 10. The prediction result in well 1 has a correlation of 0.92 and a mean squared error of 5.67 x 10-4. The trained network was then applied to the other wells' data. The results show that the correlation in well 2 and well 3 is 0.872 and 0.9077, respectively, and the mean squared error in well 2 and well 3 is 11 x 10-4 and 9.539 x 10-4, respectively. From the results we can conclude that the sonic log and gamma ray log could be a good combination for predicting porosity with a neural network.
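
    The workflow described, predicting a porosity log from sonic and gamma ray logs with a hidden layer of size 10, can be sketched with scikit-learn as below. The arrays are synthetic placeholders for the well logs, and L-BFGS stands in for the Levenberg-Marquardt training used in the paper.

```python
# Hedged sketch of the log-prediction workflow on synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# placeholder "well 1" training logs: columns = [sonic, gamma ray]
X_train = rng.normal(size=(500, 2))
y_train = 0.3 * X_train[:, 0] - 0.1 * X_train[:, 1] + 0.05 * rng.normal(size=500)

model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# evaluate on placeholder "well 2" data with the paper's two metrics
X_test = rng.normal(size=(200, 2))
y_test = 0.3 * X_test[:, 0] - 0.1 * X_test[:, 1] + 0.05 * rng.normal(size=200)
y_pred = model.predict(X_test)
corr = np.corrcoef(y_test, y_pred)[0, 1]
mse = np.mean((y_test - y_pred) ** 2)
print("correlation:", round(corr, 3), "MSE:", round(float(mse), 6))
```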

  18. Neural networks in windprofiler data processing

    Science.gov (United States)

    Weber, H.; Richner, H.; Kretzschmar, R.; Ruffieux, D.

    2003-04-01

    Wind profilers are basically Doppler radars yielding 3-dimensional wind profiles that are deduced from the Doppler shift caused by turbulent elements in the atmosphere. These signals can be contaminated by other airborne elements such as birds or hydrometeors. Using a feed-forward neural network with one hidden layer and one output unit, birds and hydrometeors can be successfully identified in non-averaged single spectra; these are subsequently removed in the wind computation. An infrared camera was used to identify birds in one of the beams of the wind profiler. After training the network with about 6000 contaminated data sets, it was able to identify contaminated data in a test data set with a reliability of 96 percent. The assumption was made that the neural network parameters obtained in the beam for which bird data was collected can be transferred to the other beams (at least three beams are needed for computing wind vectors). Comparing the evolution of a wind field with and without the neural network shows a significant improvement of wind data quality. Current work concentrates on training the network also for hydrometeors. It is hoped that the instrument's capability can thus be expanded to measure not only correct winds, but also to observe bird migration, estimate precipitation and -- by combining precipitation information with vertical velocity measurements -- monitor the height of the melting layer.

  19. Convolutional Neural Network Based dem Super Resolution

    Science.gov (United States)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. There, the nonlocal algorithm was introduced to deal with it, and many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as the partial original DEM and its related high-resolution measurements, because this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain; yet this may cause problems of incompatibility and lack of robustness. To overcome them, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted. The first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs are taken to train it. Specifically, the designed network is optimized by minimizing the error between the output and its expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super-resolution DEM is obtained. Many experiments show that the CNN based method obtains better reconstructions than many classic interpolation methods.
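
    A sketch of the three-stage architecture described (feature detection, feature compression, reconstruction), in the spirit of SRCNN-style super resolution; the filter sizes and channel counts are assumptions, not the authors' network.

```python
# Three-layer convolutional super-resolution sketch (PyTorch).
import torch
import torch.nn as nn

class DemSR(nn.Module):
    def __init__(self):
        super().__init__()
        self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)   # layer 1: detect features
        self.compress = nn.Conv2d(64, 32, kernel_size=1)           # layer 2: compress features
        self.rebuild = nn.Conv2d(32, 1, kernel_size=5, padding=2)  # layer 3: rebuild the DEM

    def forward(self, x):
        x = torch.relu(self.detect(x))
        x = torch.relu(self.compress(x))
        return self.rebuild(x)

# the low-resolution DEM is first upsampled to the target grid, then refined
model = DemSR()
low_res = torch.randn(1, 1, 32, 32)                                # placeholder DEM tile
upsampled = nn.functional.interpolate(low_res, scale_factor=4,
                                      mode="bilinear", align_corners=False)
high_res = model(upsampled)
print(high_res.shape)   # torch.Size([1, 1, 128, 128])
```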

  20. Neutron spectrum unfolding using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)]. E-mail: rvega@cantera.reduaz.mx

    2004-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows unfolding the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the test count rates belong to neutron spectra used during training, and acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)

  1. Temporal solar irradiance variability analysis using neural networks

    Science.gov (United States)

    Tebabal, Ambelu; Damtie, Baylie; Nigussie, Melessew

    A feed-forward neural network, which can account for nonlinear relationships, was used to model total solar irradiance (TSI). A single-layer feed-forward neural network with the Levenberg-Marquardt back-propagation algorithm has been implemented for modeling daily total solar irradiance from the daily photometric sunspot index and the core-to-wing ratio of Mg II index data. In order to obtain the optimum neural network for TSI modeling, the root mean square error (RMSE) and mean absolute error (MAE) have been taken into account. The modeled and measured TSI have a correlation coefficient of about R=0.97. The neural network (NN) model output indicates that TSI reconstructed from solar proxies (the photometric sunspot index and Mg II) can explain 94% of the variance of TSI. This modeled TSI using NNs further strengthens the view that surface magnetism indeed plays a dominant role in modulating solar irradiance.

  2. A NEURAL NETWORK STUDY ON GLASS TRANSITION TEMPERATURE OF POLYMERS

    Institute of Scientific and Technical Information of China (English)

    Lin-xi Zhanga; De-lu Zhao; You-xing Huang

    2002-01-01

    In this paper, an artificial neural network model is adopted to study the glass transition temperature of polymers. In our artificial neural networks, the input nodes are the characteristic ratio C∞, the average molecular weight Me between entanglement points and the molecular weight Mmon of the repeating unit. The output node is the glass transition temperature Tg, and the number of nodes in the hidden layer is 6. We found that the artificial neural network simulations are accurate in predicting the outcome for polymers on which the network was not trained. The maximum relative error in prediction of the glass transition temperature is 3.47%, and the overall average error is only 2.27%. Artificial neural networks may provide some new ideas for investigating other properties of polymers.

  3. High Accuracy Human Activity Monitoring using Neural network

    CERN Document Server

    Sharma, Annapurna; Chung, Wan-Young

    2011-01-01

    This paper presents the design of a neural network for the classification of human activity. A triaxial accelerometer sensor, housed in a chest-worn sensor unit, has been used for capturing the acceleration of the associated movements. All three axes of acceleration data were collected at a base station PC via a CC2420 2.4 GHz ISM band radio (ZigBee wireless compliant), then processed and classified using MATLAB. A neural network approach for classification was used with an eye on theoretical and empirical facts. The work shows a detailed description of the design steps for the classification of human body acceleration data. A 4-layer back-propagation neural network, with the Levenberg-Marquardt algorithm for training, showed the best performance among the neural network training algorithms considered.

  4. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using...... a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System...

  5. A neural network based seafloor classification using acoustic backscatter

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    This paper presents the results of a study of Artificial Neural Network (ANN) architectures [Self-Organizing Map (SOM) and Multi-Layer Perceptron (MLP)] using single beam echosounding data. The single beam echosounder, operable at 12 kHz, has been used...

  6. Some structural determinants of Pavlovian conditioning in artificial neural networks

    NARCIS (Netherlands)

    Sanchez, Jose M.; Galeazzi, Juan M.; Burgos, Jose E.

    2010-01-01

    This paper investigates the possible role of neuroanatomical features in Pavlovian conditioning, via computer simulations with layered, feedforward artificial neural networks. The networks' structure and functioning are described by a strongly bottom-up model that takes into account the roles of hip

  7. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks working from only a single external input, one layer of nonlinear hidden units and a linear output unit are considered, applied to prediction of discrete time...

  8. Design of fiber optic adaline neural networks

    Science.gov (United States)

    Ghosh, Anjan K.; Trepka, Jim

    1997-03-01

    Based on possible optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators we describe the design of a single-layer fiber optic Adaline neural network that can be used as a bit pattern classifier. In our design, we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The described new optical neural network design is for optical processing of guided light wave signals, not electronic signals. We analyze the convergence or learning characteristics of the optoelectronic Adaline in the presence of errors in the hardware. We show that with such an optoelectronic Adaline it is possible to detect a desired code word/token/header with good accuracy.
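
    A small software analogue (not the optical design) of a single Adaline bit-pattern classifier trained with the LMS (Widrow-Hoff) rule to flag one desired 8-bit code word; the code word and step size are placeholders.

```python
# LMS-trained Adaline detector for one stored bipolar code word.
import numpy as np

rng = np.random.default_rng(0)
header = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)   # desired code word
w, b, mu = np.zeros(8), 0.0, 0.05                              # weights, bias, step size

for _ in range(2000):
    if rng.random() < 0.5:
        x, d = header, 1.0                                     # present the code word
    else:
        x, d = rng.choice([-1.0, 1.0], size=8), -1.0           # any other word
    e = d - (w @ x + b)                                        # linear (Adaline) error
    w += mu * e * x                                            # LMS / Widrow-Hoff update
    b += mu * e

others = rng.choice([-1.0, 1.0], size=(1000, 8))
print("output for the code word:", round(w @ header + b, 2))            # well above zero
print("mean output for random words:", round(float(np.mean(others @ w + b)), 2))  # below zero
```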

  9. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  10. Hopfield neural network based on ant system

    Institute of Scientific and Technical Information of China (English)

    洪炳镕; 金飞虎; 郭琦

    2004-01-01

    The Hopfield neural network is a single-layer feedback neural network. The Hopfield network requires some control parameters to be carefully selected, else the network is apt to converge to a local minimum. An ant system is a nature-inspired metaheuristic algorithm. It has been applied to several combinatorial optimization problems such as the Traveling Salesman Problem, scheduling problems, etc. This paper shows that an ant system may be used to tune the network control parameters by a group of cooperating ants. The major advantage of this network is that it adjusts the network parameters automatically, avoiding a blind search for the set of control parameters. This network was tested on two TSP problems, with 5 cities and 10 cities. The results show an obvious improvement.

  11. Olfactory Decoding Method Using Neural Spike Signals

    Institute of Scientific and Technical Information of China (English)

    Kyung-jin YOU; Hyun-chool SHIN

    2010-01-01

    This paper presents a novel method for inferring odors based on neural activities observed from rats' main olfactory bulbs. Multi-channel extracellular single-unit recordings were made with microwire electrodes (tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of anesthetized rats to obtain neural responses to various odors. Neural responses, used as the key feature, are measured by subtracting firing rates before the stimulus from those after it. For odor inference, a decoding method is developed based on ML estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, and 80.0% for three rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.

  12. Forecasting increasing rate of power consumption based on immune genetic algorithm combined with neural network

    Institute of Scientific and Technical Information of China (English)

    杨淑霞

    2008-01-01

    Considering the factors affecting the increasing rate of power consumption, the BP neural network structure and a neural network forecasting model of the increasing rate of power consumption were established. An immune genetic algorithm was applied to optimize the weights from the input layer to the hidden layer and from the hidden layer to the output layer, as well as the threshold values of the neuron nodes in the hidden and output layers. Finally, by training on data for the increasing rate of power consumption from 1980 to 2000 in China, a nonlinear network model relating the increasing rate of power consumption to its influencing factors was obtained. The model was used to forecast the increasing rate of power consumption from 2001 to 2005, and the average absolute error ratio of the forecasting results is 13.5218%. Compared with an ordinary neural network optimized by a genetic algorithm, the results show that this method has better forecasting accuracy and stability for forecasting the increasing rate of power consumption.
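
    The weight-optimization step can be illustrated with a plain genetic algorithm (the immune operators of the paper are omitted): a small 1-3-1 network is fitted to a toy curve by selection, crossover and mutation applied to its flattened weight vector.

```python
# Genetic-algorithm optimization of the weights of a tiny 1-3-1 network.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 40).reshape(-1, 1)
Y = np.sin(3 * X)                                   # toy target curve

def forward(theta, x):
    W1, b1 = theta[:3].reshape(1, 3), theta[3:6]
    W2, b2 = theta[6:9].reshape(3, 1), theta[9]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def fitness(theta):
    return -np.mean((forward(theta, X) - Y) ** 2)   # negative MSE

pop = rng.normal(0, 1, size=(60, 10))               # population of weight vectors
for gen in range(200):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]    # truncation selection
    children = []
    while len(children) < 40:
        p1, p2 = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(10) < 0.5                 # uniform crossover
        children.append(np.where(mask, p1, p2) + rng.normal(0, 0.1, 10))  # + mutation
    pop = np.vstack([parents, np.array(children)])

print("best MSE:", round(-max(fitness(t) for t in pop), 4))
```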

  13. Neural networks in seismic discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, F.U.

    1995-01-01

    Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.

  14. Landmine detection and classification with complex-valued hybrid neural network using scattering parameters dataset.

    Science.gov (United States)

    Yang, Chih-Chung; Bose, N K

    2005-05-01

    Neural networks have been applied to landmine detection from data generated by different kinds of sensors. Real-valued neural networks have been used for detecting landmines from scattering parameters measured by ground penetrating radar (GPR) after disregarding phase information. This paper presents results using complex-valued neural networks, capable of phase-sensitive detection followed by classification. A two-layer hybrid neural network structure incorporating both supervised and unsupervised learning is proposed to detect and then classify the types of landmines. Tests are also reported on benchmark data.
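
    A hedged sketch of the core ingredient, a complex-valued neuron acting on scattering parameters: complex weights preserve phase, and one common convention (assumed here, not taken from the paper) squashes the magnitude while keeping the phase.

```python
# Single complex-valued neuron on placeholder S-parameters.
import numpy as np

rng = np.random.default_rng(0)
s_params = rng.normal(size=8) + 1j * rng.normal(size=8)   # placeholder S-parameters
weights = rng.normal(size=8) + 1j * rng.normal(size=8)
bias = 0.1 + 0.2j

z = np.vdot(weights, s_params) + bias                     # complex inner product
activation = np.tanh(np.abs(z)) * np.exp(1j * np.angle(z))  # magnitude squashed, phase kept
print("output:", np.round(activation, 3))
```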

  15. The study of fuzzy chaotic neural network based on chaotic method

    Institute of Scientific and Technical Information of China (English)

    WANG Ke-jun; TANG Mo; ZHANG Yan

    2006-01-01

    This paper proposes a type of Fuzzy Chaotic Neural Network (FCNN). First, the model of a recurrent fuzzy neural network (RFNN) is considered, which adds feedback in the second layer to realize a dynamic map. Then, the logistic map is introduced into the recurrent fuzzy neural network so as to build the FCNN. Its chaotic character is analyzed, and the training algorithm and associative memory ability are studied subsequently. Finally, a chaotic system is approximated using the FCNN; the simulation results indicate that the FCNN can approximate the dynamic system well, and owing to the introduction of the chaotic map, the chaotic recall capacity of the FCNN is increased.
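
    A toy illustration of the chaotic ingredient the FCNN introduces, the logistic map: two nearly identical initial states diverge rapidly under iteration, which is the sensitive dependence behind its chaotic character. This sketches the map alone, not the full network.

```python
# Sensitive dependence of the logistic map at r = 4.
import numpy as np

def iterate(s0, steps=40, r=4.0):
    trace = [s0]
    for _ in range(steps):
        trace.append(r * trace[-1] * (1.0 - trace[-1]))   # logistic map
    return np.array(trace)

a = iterate(0.300000)
b = iterate(0.300001)                      # perturb the initial state by 1e-6
gap = np.abs(a - b)
print("initial gap:", gap[0], " gap after 40 iterations:", round(gap[-1], 3))
```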

  16. Optical implementation of a feature-based neural network with application to automatic target recognition

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by feeding back the output of the feature correlator iteratively to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. Experimental demonstrations of a two-layer neural network for space-object discrimination are also presented.

  17. Mechanical roles of apical constriction, cell elongation, and cell migration during neural tube formation in Xenopus.

    Science.gov (United States)

    Inoue, Yasuhiro; Suzuki, Makoto; Watanabe, Tadashi; Yasue, Naoko; Tateo, Itsuki; Adachi, Taiji; Ueno, Naoto

    2016-12-01

    Neural tube closure is an important and necessary process during the development of the central nervous system. The formation of the neural tube structure from a flat sheet of neural epithelium requires several cell morphogenetic events and tissue dynamics to account for the mechanics of tissue deformation. Cell elongation changes cuboidal cells into columnar cells, and apical constriction then causes them to adopt apically narrow, wedge-like shapes. In addition, the neural plate in Xenopus is stratified, and the non-neural cells in the deep layer (deep cells) pull the overlying superficial cells, eventually bringing the two layers of cells to the midline. Thus, neural tube closure appears to be a complex event in which these three physical events are considered to play key mechanical roles. To test whether these three physical events are mechanically sufficient to drive neural tube formation, we employed a three-dimensional vertex model and used it to simulate the process of neural tube closure. The results suggest that apical constriction cued the bending of the neural plate by pursing the circumference of the apical surface of the neural cells. Neural cell elongation in concert with apical constriction further narrowed the apical surface of the cells and drove the rapid folding of the neural plate, but was insufficient for complete neural tube closure. Migration of the deep cells provided the additional tissue deformation necessary for closure. To validate the model, apical constriction and cell elongation were inhibited in Xenopus laevis embryos. The resulting cell and tissue shapes resembled the corresponding simulation results.

  18. Classification of clustered microcalcifications using a Shape Cognitron neural network.

    Science.gov (United States)

    Lee, San Kan; Chung, Pau choo; Chang, Chein I; Lo, Chien Shun; Lee, Tain; Hsu, Giu Cheng; Yang, Chin Wen

    2003-01-01

    A new shape recognition-based neural network built with universal feature planes, called Shape Cognitron (S-Cognitron), is introduced to classify clustered microcalcifications. The architecture of the S-Cognitron consists of two modules and an extra layer, called the 3D figure layer, which lies in between. The first module contains a shape orientation layer, built with 20 cell planes of low-level universal shape features, to convert first-order shape orientations into numeric values, and a complex layer to extract second-order shape features. The 3D figure layer is a feature extract-display layer that extracts the shape curvatures of an input pattern and displays them as a 3D figure. It is then followed by a second module made up of a feature formation layer and a probabilistic neural network-based classification layer. The system is evaluated using the Nijmegen mammogram database, and experimental results show that sensitivity and specificity can reach 86.1 and 74.1%, respectively.

  19. An FGF3-BMP Signaling Axis Regulates Caudal Neural Tube Closure, Neural Crest Specification and Anterior-Posterior Axis Extension.

    Directory of Open Access Journals (Sweden)

    Matthew J Anderson

    2016-05-01

    Full Text Available During vertebrate axis extension, adjacent tissue layers undergo profound morphological changes: within the neuroepithelium, neural tube closure and neural crest formation are occurring, while within the paraxial mesoderm somites are segmenting from the presomitic mesoderm (PSM. Little is known about the signals between these tissues that regulate their coordinated morphogenesis. Here, we analyze the posterior axis truncation of mouse Fgf3 null homozygotes and demonstrate that the earliest role of PSM-derived FGF3 is to regulate BMP signals in the adjacent neuroepithelium. FGF3 loss causes elevated BMP signals leading to increased neuroepithelium proliferation, delay in neural tube closure and premature neural crest specification. We demonstrate that elevated BMP4 depletes PSM progenitors in vitro, phenocopying the Fgf3 mutant, suggesting that excessive BMP signals cause the Fgf3 axis defect. To test this in vivo we increased BMP signaling in Fgf3 mutants by removing one copy of Noggin, which encodes a BMP antagonist. In such mutants, all parameters of the Fgf3 phenotype were exacerbated: neural tube closure delay, premature neural crest specification, and premature axis termination. Conversely, genetically decreasing BMP signaling in Fgf3 mutants, via loss of BMP receptor activity, alleviates morphological defects. Aberrant apoptosis is observed in the Fgf3 mutant tailbud. However, we demonstrate that cell death does not cause the Fgf3 phenotype: blocking apoptosis via deletion of pro-apoptotic genes surprisingly increases all Fgf3 defects including causing spina bifida. We demonstrate that this counterintuitive consequence of blocking apoptosis is caused by the increased survival of BMP-producing cells in the neuroepithelium. Thus, we show that FGF3 in the caudal vertebrate embryo regulates BMP signaling in the neuroepithelium, which in turn regulates neural tube closure, neural crest specification and axis termination. Uncovering this FGF3

  20. An FGF3-BMP Signaling Axis Regulates Caudal Neural Tube Closure, Neural Crest Specification and Anterior-Posterior Axis Extension.

    Science.gov (United States)

    Anderson, Matthew J; Schimmang, Thomas; Lewandoski, Mark

    2016-05-01

    During vertebrate axis extension, adjacent tissue layers undergo profound morphological changes: within the neuroepithelium, neural tube closure and neural crest formation are occurring, while within the paraxial mesoderm somites are segmenting from the presomitic mesoderm (PSM). Little is known about the signals between these tissues that regulate their coordinated morphogenesis. Here, we analyze the posterior axis truncation of mouse Fgf3 null homozygotes and demonstrate that the earliest role of PSM-derived FGF3 is to regulate BMP signals in the adjacent neuroepithelium. FGF3 loss causes elevated BMP signals leading to increased neuroepithelium proliferation, delay in neural tube closure and premature neural crest specification. We demonstrate that elevated BMP4 depletes PSM progenitors in vitro, phenocopying the Fgf3 mutant, suggesting that excessive BMP signals cause the Fgf3 axis defect. To test this in vivo we increased BMP signaling in Fgf3 mutants by removing one copy of Noggin, which encodes a BMP antagonist. In such mutants, all parameters of the Fgf3 phenotype were exacerbated: neural tube closure delay, premature neural crest specification, and premature axis termination. Conversely, genetically decreasing BMP signaling in Fgf3 mutants, via loss of BMP receptor activity, alleviates morphological defects. Aberrant apoptosis is observed in the Fgf3 mutant tailbud. However, we demonstrate that cell death does not cause the Fgf3 phenotype: blocking apoptosis via deletion of pro-apoptotic genes surprisingly increases all Fgf3 defects including causing spina bifida. We demonstrate that this counterintuitive consequence of blocking apoptosis is caused by the increased survival of BMP-producing cells in the neuroepithelium. Thus, we show that FGF3 in the caudal vertebrate embryo regulates BMP signaling in the neuroepithelium, which in turn regulates neural tube closure, neural crest specification and axis termination. Uncovering this FGF3-BMP signaling axis is

  1. Optical neural computing for associative memories

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Ken Yuh.

    1990-01-01

    Optical techniques for implementing neural computers are presented. In particular, holographic associative memories with feedback are investigated. Characteristics of optical neurons and optical interconnections are discussed. An LCLV is used for simulating a 2-D array of approximately 160,000 optical neurons. Thermoplastic plates are used for providing holographic interconnections among these neurons. The problem of degenerate readout in holographic interconnections and the method of sampling grids to solve this problem are presented. Two optical neural networks for associative memories are implemented and demonstrated. The first one is an optical implementation of the Hopfield network. It performs the function of auto-association that recognizes 2-D images from a distorted or partially blocked input. The trade-off between distortion tolerance and discrimination capability against new images is discussed. The second optical loop is a 2-layer network with feedback. It performs the function of hetero-association, which locks the recognized input and its associated image as a stable state in the loop. In both optical loops, it is shown that the neural gain and the similarity between the input and the stored images are the main factors that determine the dynamics of the network. Neural network models for the optical loops are presented. Equations of motion for describing the dynamical behavior of the systems are derived. The reciprocal vector basis corresponding to stored images is derived. A geometrical method is then introduced which allows us to inspect the convergence property of the system. It is also shown that the main factors that determine the system dynamics are the neural gain and the initial conditions. Photorefractive holography for optical interconnections and sampling grids for volume holographic interconnections are presented.
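
    A small software analogue (not the optical hardware) of the auto-associative loop described: Hebbian storage of binary patterns and iterative recall from a partially blocked input.

```python
# Hopfield-style auto-associative recall with Hebbian weights.
import numpy as np

rng = np.random.default_rng(0)
N, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, N))

W = (patterns.T @ patterns).astype(float) / N       # Hebbian interconnection weights
np.fill_diagonal(W, 0.0)

probe = patterns[0].astype(float)
probe[:40] = 1.0                                    # "block" part of the input image

state = probe.copy()
for _ in range(20):                                 # feedback loop iterations
    state = np.sign(W @ state)
    state[state == 0] = 1.0

print("overlap with stored image:", int(state @ patterns[0]), "/", N)
```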

  2. Construction of a Piezoresistive Neural Sensor Array

    Science.gov (United States)

    Carlson, W. B.; Schulze, W. A.; Pilgrim, P. M.

    1996-01-01

    The construction of a piezoresistive-piezoelectric sensor (or actuator) array is proposed using 'neural' connectivity for signal recognition and possible actuation functions. A closer integration of the sensor and decision functions is necessary in order to achieve intrinsic identification within the sensor. A neural sensor is the next logical step in the development of truly 'intelligent' arrays. This proposal will integrate 1-3 polymer piezoresistors and MLC electroceramic devices for applications involving acoustic identification. The 'intelligent' piezoresistor-piezoelectric system incorporates printed resistors, composite resistors, and a feedback for the resetting of resistances. A model of a design is proposed in order to simulate electromechanical resistor interactions. Optimizing a sensor geometry to improve device reliability, training, and signal identification capabilities is the goal of this work. At present, studies predict performance of a 'smart' device with a significant control of 'effective' compliance over a narrow pressure range due to a piezoresistor percolation threshold. An interesting possibility may be to use an array of control elements to shift the threshold function in order to change the level of resistance in a neural sensor array for identification or actuation applications. The proposed design employs elements of: (1) conductor loaded polymers for a 'fast' RC time constant response; and (2) multilayer ceramics for actuation or sensing and shifting of resistance in the polymer. Other material possibilities also exist using magnetoresistive layered systems for shifting the resistance. It is proposed to use a neural net configuration to test and to help study the possible changes required in the materials design of these devices. Numerical design models utilize electromechanical elements, in conjunction with structural elements, in order to simulate piezoresistively controlled actuators and changes in resistance of sensors.

  3. Prediction of the plasma distribution using an artificial neural network

    Institute of Scientific and Technical Information of China (English)

    Li Wei; Chen JunFang; Wang Teng

    2009-01-01

    In this work, an artificial neural network (ANN) model is established using a back-propagation training algorithm in order to predict the plasma spatial distribution in an electron cyclotron resonance (ECR) plasma-enhanced chemical vapor deposition (PECVD) plasma system. In our model, there are three layers: the input layer, the hidden layer and the output layer. The input layer is composed of five neurons: the radial position, the axial position, the gas pressure, the microwave power and the magnet coil current. The output layer is our target output neuron: the plasma density. The accuracy of our prediction is tested with the experimental data obtained by a Langmuir probe, and ANN results show a good agreement with the experimental data. It is concluded that ANN is a useful tool in dealing with some nonlinear problems of the plasma spatial distribution.
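
    A minimal sketch of the kind of model described above: five inputs, one hidden layer, one output, trained with a gradient-based (back-propagation) solver. It assumes scikit-learn and uses synthetic stand-in data rather than the ECR-PECVD measurements.

```python
# Sketch of a 5-input, single-hidden-layer regression network for plasma density.
# The data below are synthetic placeholders, not Langmuir-probe measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: radial position, axial position, gas pressure, microwave power, coil current
X = rng.uniform(size=(500, 5))
y = np.exp(-X[:, 0] ** 2) * (1.0 + 0.5 * X[:, 3])   # toy surrogate for plasma density

X_std = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="sgd", learning_rate_init=0.01,
                     max_iter=5000, random_state=0)
model.fit(X_std, y)
print("training R^2:", model.score(X_std, y))
```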

  4. Quantitative analysis of volatile organic compounds using ion mobility spectra and cascade correlation neural networks

    Science.gov (United States)

    Harrington, Peter DEB.; Zheng, Peng

    1995-01-01

    Ion Mobility Spectrometry (IMS) is a powerful technique for trace organic analysis in the gas phase. Quantitative measurements are difficult, because IMS has a limited linear range. Factors that may affect the instrument response are pressure, temperature, and humidity. Nonlinear calibration methods, such as neural networks, may be ideally suited for IMS. Neural networks have the capability of modeling complex systems. Many neural networks suffer from long training times and overfitting. Cascade correlation neural networks train at very fast rates. They also build their own topology, that is, the number of layers and the number of units in each layer. By controlling the decay parameter in training neural networks, reproducible and general models may be obtained.

  5. Training Data Requirement for a Neural Network to Predict Aerodynamic Coefficients

    Science.gov (United States)

    Korsmeyer, David (Technical Monitor); Rajkumar, T.; Bardina, Jorge

    2003-01-01

    Basic aerodynamic coefficients are modeled as functions of angle of attack, speed brake deflection angle, Mach number, and side slip angle. Most of the aerodynamic parameters can be well-fitted using polynomial functions. We previously demonstrated that a neural network is a fast, reliable way of predicting aerodynamic coefficients. We encountered a few underfitted and/or overfitted results during prediction. The training data for the neural network are derived from wind tunnel test measurements and numerical simulations. The basic questions that arise are: how many training data points are required to produce an efficient neural network prediction, and which type of transfer function should be used between the input-hidden layer and hidden-output layer. In this paper, a comparative study of the efficiency of neural network prediction based on different transfer functions and training dataset sizes is presented. The results of the neural network prediction reflect the sensitivity of the architecture, transfer functions, and training dataset size.
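
    The comparison described above can be prototyped in a few lines; the sketch below sweeps training-set size and transfer (activation) function for a small regressor, assuming scikit-learn and a synthetic surrogate in place of the wind-tunnel data.

```python
# Compare activation ("transfer") functions and training-set sizes for a small
# aerodynamic-coefficient regressor. The target is a toy polynomial surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 4))   # angle of attack, speed brake, Mach, sideslip
y = 0.1 + 2.0 * X[:, 0] + 0.3 * X[:, 0] ** 2 - 0.05 * X[:, 2] * X[:, 0]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
for n_train in (50, 200, 800):
    for act in ("logistic", "tanh", "relu"):
        net = MLPRegressor(hidden_layer_sizes=(20,), activation=act,
                           max_iter=5000, random_state=1)
        net.fit(X_tr[:n_train], y_tr[:n_train])
        print(f"n={n_train:4d}  activation={act:8s}  test R^2={net.score(X_te, y_te):.3f}")
```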

  6. Rule Extraction:Using Neural Networks or for Neural Networks?

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Zhou

    2004-01-01

    In the research of rule extraction from neural networks, fidelity describes how well the rules mimic the behavior of a neural network, while accuracy describes how well the rules generalize. This paper identifies the fidelity-accuracy dilemma. It argues that rule extraction using neural networks and rule extraction for neural networks should be distinguished according to their different goals, and that fidelity or accuracy, respectively, should then be excluded from the rule quality evaluation framework.

  7. Multi-Layered Feedforward Neural Networks for Image Segmentation

    Science.gov (United States)

    1991-12-01


  8. Fuzzy Multiresolution Neural Networks

    Science.gov (United States)

    Ying, Li; Qigang, Shang; Na, Lei

    A fuzzy multi-resolution neural network (FMRANN) based on a particle swarm algorithm is proposed to approximate arbitrary nonlinear functions. The active function of the FMRANN consists not only of wavelet functions, but also of scaling functions, whose translation parameters and dilation parameters are adjustable. A set of fuzzy rules is involved in the FMRANN. Each rule corresponds either to a subset consisting of scaling functions, or to a sub-wavelet neural network consisting of wavelets with the same dilation parameters. By incorporating the time-frequency localization and multi-resolution properties of wavelets with the self-learning ability of fuzzy neural networks, the approximation ability of the FMRANN can be remarkably improved. A particle swarm algorithm is adopted to learn the translation and dilation parameters of the wavelets and to adjust the shape of the membership functions. Simulation examples are presented to validate the effectiveness of the FMRANN.

  9. Single-Iteration Learning Algorithm for Feed-Forward Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Cogswell, R.; Protopopescu, V.

    1999-07-31

    A new methodology for neural learning is presented, whereby only a single iteration is required to train a feed-forward network with near-optimal results. To this aim, a virtual input layer is added to the multi-layer architecture. The virtual input layer is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This algorithm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network.
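
    The alternating-direction SVD procedure itself is not reproduced here; as a loose illustration of the same separability idea, the sketch below fixes a nonlinear hidden layer and solves the hidden-to-output weights in one closed-form linear-algebra step (least squares), assuming only numpy.

```python
# Illustrative only: a fixed random nonlinear hidden layer plus a single
# least-squares solve for the output weights, exploiting the separability of
# linear propagation and nonlinear activation. Not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]          # target function

n_hidden = 50
W_in = rng.normal(size=(2, n_hidden))          # input-to-hidden weights (kept fixed)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)                      # nonlinear hidden activations
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # "single-iteration" closed-form solve

y_hat = H @ W_out
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```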

  10. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note provides an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  11. Regulation of endogenous neural stem/progenitor cells for neural repair - factors that promote neurogenesis and gliogenesis in the normal and damaged brain

    Directory of Open Access Journals (Sweden)

    Kimberly eChristie

    2013-01-01

    Full Text Available Neural stem/precursor cells in the adult brain reside in the subventricular zone (SVZ of the lateral ventricles and the subgranular zone (SGZ of the dentate gyrus in the hippocampus. These cells primarily generate neuroblasts that normally migrate to the olfactory bulb and the dentate granule cell layer respectively. Following brain damage, such as traumatic brain injury, ischemic stroke or in degenerative disease models, neural precursor cells from the SVZ in particular, can migrate from their normal route along the rostral migratory stream to the site of neural damage. This neural precursor cell response to neural damage is mediated by release of endogenous factors, including cytokines and chemokines produced by the inflammatory response at the injury site, and by the production of growth and neurotrophic factors. Endogenous hippocampal neurogenesis is frequently also directly or indirectly affected by neural damage. Administration of a variety of factors that regulate different aspects of neural stem/precursor biology often leads to improved functional motor and/or behavioural outcomes. Such factors can target neural stem/precursor proliferation, survival, migration and differentiation into appropriate neuronal or glial lineages. Newborn cells also need to subsequently survive and functionally integrate into extant neural circuitry, which may be the major bottleneck to the current therapeutic potential of neural stem/precursor cells. This review will cover the effects of a range of intrinsic and extrinsic factors that regulate neural stem /precursor cell functions. In particular it focuses on factors that may be harnessed to enhance the endogenous neural stem/precursor cell response to neural damage, highlighting those that have already shown evidence of preclinical effectiveness and discussing others that warrant further preclinical investigation.

  12. Existence and uniqueness results for neural network approximations.

    Science.gov (United States)

    Williamson, R C; Helmke, U

    1995-01-01

    Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Pade approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.

  13. Glaucoma detection based on deep convolutional neural network.

    Science.gov (United States)

    Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu

    2015-08-01

    Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show area under curve (AUC) of the receiver operating characteristic curve in glaucoma detection at 0.831 and 0.887 in the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection.
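
    A minimal sketch of a six-layer architecture in the spirit described above (four convolutional layers followed by two fully connected layers, with dropout), assuming PyTorch; the layer sizes are illustrative, not the published configuration.

```python
# Illustrative CNN for binary glaucoma/non-glaucoma classification:
# four convolutional layers, two fully connected layers, dropout in the head.
import torch
import torch.nn as nn

class GlaucomaCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, kernel_size=3), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

net = GlaucomaCNN()
dummy = torch.randn(2, 3, 224, 224)   # stand-in for preprocessed fundus images
print(net(dummy).shape)               # torch.Size([2, 2])
```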

  14. Control of a Uniform Step Asymmetrical 9-Level Inverter Based on Artificial Neural Network Strategy

    Directory of Open Access Journals (Sweden)

    Rachid Taleb

    2009-12-01

    Full Text Available A neural implementation of a harmonic elimination strategy for the control of a uniform step asymmetrical 9-level inverter is proposed and described in this paper. A Multi-Layer Perceptron (MLP) neural network is used to approximate the mapping between the modulation rate and the required switching angles. After learning, the neural network generates the appropriate switching angles for the inverter. This leads to a low computational-cost neural controller which is therefore well suited for real-time applications. This neural approach is compared to the well-known Multi-Carrier Pulse-Width Modulation (MCPWM). Simulation results demonstrate the technical advantages of the neural implementation of the harmonic elimination strategy over the conventional method for the control of a uniform step asymmetrical 9-level inverter. The approach is used to supply an asynchronous machine, and results show that the neural method ensures a higher-quality torque by efficiently canceling the harmonics generated by the inverter.

  15. Generalized Adaptive Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  16. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.

  17. Interval probabilistic neural network.

    Science.gov (United States)

    Kowalski, Piotr A; Kulczycki, Piotr

    2017-01-01

    Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems reduce the need for human intervention in obtaining analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural network approach for classifying interval information. The presented methodology is a generalization of the probabilistic neural network for interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal potential losses arising from classification errors. In this article, the topological structure of the network and the learning process are described in detail. The correctness of the procedure proposed here has been verified by way of numerical tests, which include examples of both synthetic data and benchmark instances. The results of numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.
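
    The interval-valued generalization is not reproduced here; as a point of reference, a classical probabilistic neural network (class-conditional Parzen densities with Gaussian kernels and a Bayes decision) can be sketched as follows, assuming numpy and equal priors and losses.

```python
# Minimal classical PNN: Gaussian Parzen density per class, Bayes decision.
# Interval-valued inputs (the paper's extension) are not handled in this sketch.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.3):
    classes = np.unique(y_train)
    scores = np.zeros((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
print(pnn_predict(X, y, np.array([[0.1, 0.2], [1.9, 2.1]])))   # expected: [0 1]
```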

  18. Optimization of multilayer neural network parameters for speaker recognition

    Science.gov (United States)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e. to determine whether the voice of an unknown speaker (wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) needs parameters to be set, such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by the parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCC), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The training, testing and validation data sets were split 70%, 15% and 15%. The result of the research described in this article is a parameter setting for the multilayer neural network for four speakers.
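
    A compact way to run such a parameter study is a grid search over the network settings; the sketch below assumes scikit-learn and treats X as already-extracted MFCC feature vectors (random numbers stand in for real recordings).

```python
# Grid search over MLP parameters (hidden sizes, activation, learning rate)
# for a four-speaker identification task. Features are placeholder "MFCCs".
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 13))      # 13 MFCC coefficients per utterance (placeholder)
y = rng.integers(0, 4, size=400)    # four speakers

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
grid = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(16,), (32,), (32, 16)],
        "activation": ["logistic", "tanh", "relu"],
        "learning_rate_init": [1e-3, 1e-2],
    },
    cv=3,
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, "validation accuracy:", grid.score(X_val, y_val))
```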

  19. Financial time series prediction using spiking neural networks.

    Science.gov (United States)

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  1. Discriminating lysosomal membrane protein types using dynamic neural network.

    Science.gov (United States)

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology, which classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural-network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, which include seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e. their output depends not only on the input but also on previous outputs. The accuracy of the LRN classifier thus comes out to be the highest among all the artificial neural networks considered. The overall accuracy of jackknife cross-validation is 93.2% for the data-set. These predicted results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that the protein sequence representation can better reflect the core feature of membrane proteins than the classical AA composition.

  2. Fluctuation-response relation unifies dynamical behaviors in neural fields

    Science.gov (United States)

    Fung, C. C. Alan; Wong, K. Y. Michael; Mao, Hongzi; Wu, Si

    2015-08-01

    Anticipation is a strategy used by neural fields to compensate for transmission and processing delays during the tracking of dynamical information and can be achieved by slow, localized, inhibitory feedback mechanisms such as short-term synaptic depression, spike-frequency adaptation, or inhibitory feedback from other layers. Based on the translational symmetry of the mobile network states, we derive generic fluctuation-response relations, providing unified predictions that link their tracking behaviors in the presence of external stimuli to the intrinsic dynamics of the neural fields in their absence.

  3. Nonlinear wind prediction using a fuzzy modular temporal neural network

    Energy Technology Data Exchange (ETDEWEB)

    Wu, G.G. [GeoControl Systems, Inc., Houston, TX (United States); Zhijie Dou [West Texas A& M Univ., Canyon, TX (United States)

    1995-12-31

    This paper introduces a new approach utilizing a fuzzy classifier and a modular temporal neural network to predict wind speed and direction for advanced wind turbine control systems. The fuzzy classifier estimates wind patterns and then assigns weights accordingly to each module of the temporal neural network. A temporal network with a finite-duration impulse response and a multiple-layer structure is used to represent the underlying dynamics of the physical phenomena. Using previous wind measurements and information given by the classifier, the modular network trained by a standard back-propagation algorithm predicts wind speed and direction effectively. Meanwhile, the feedback from the network helps auto-tune the classifier.

  4. A Novel Evolutionary Feedforward Neural Network with Artificial Immunology

    Institute of Scientific and Technical Information of China (English)

    宫新保; 臧小刚; 周希朗

    2003-01-01

    A hybrid algorithm to design the multi-layer feedforward neural network was proposed. Evolutionary programming is used to design the network so that the training process tends toward the global optimum. Artificial immunology combined with a simulated annealing algorithm is used to specify the initial weight vectors, which improves the probability of the training algorithm converging to the global optimum. The application of the neural network to the modulation-style recognition of analog modulated radar signals demonstrates the good performance of the network.

  5. Manipulator Neural Network Control Based on Fuzzy Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The three-layer forward neural networks are used to establish the inverse kinematics models of robot manipulators. The fuzzy genetic algorithm based on the linear scaling of the fitness value is presented to update the weights of neural networks. To increase the search speed of the algorithm, the crossover probability and the mutation probability are adjusted through fuzzy control and the fitness is modified by the linear scaling method in FGA. Simulations show that the proposed method improves considerably the precision of the inverse kinematics solutions for robot manipulators and guarantees a rapid global convergence and overcomes the drawbacks of SGA and the BP algorithm.

  6. Neural Approach for Calculating Permeability of Porous Medium

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ji-Cheng; LIU Li; SONG Kao-Ping

    2006-01-01

    Permeability is one of the most important properties of porous media. It is considerably difficult to calculate reservoir permeability precisely using a single well-logging response and a simple formula, because reservoirs are seriously heterogeneous and well-logging response curves are strongly affected by many complicated factors underground. We propose a neural network method to calculate the permeability of porous media. By improving the algorithm of the back-propagation neural network, convergence speed is enhanced and better results can be achieved. A four-layer back-propagation network is constructed to effectively calculate permeability from well log data.

  7. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  8. Neural network parameters affecting image classification

    Directory of Open Access Journals (Sweden)

    K.C. Tiwari

    2001-07-01

    Full Text Available This study assesses the behaviour and impact of various neural network parameters on the classification accuracy of remotely sensed images, and it resulted in the successful classification of an IRS-1B LISS II image of Roorkee and its surrounding areas using neural network classification techniques. The method can be applied to various defence applications, such as the identification of enemy troop concentrations and logistical planning in deserts by identification of areas suitable for vehicular movement. Five parameters, namely training sample size, number of hidden layers, number of hidden nodes, learning rate and momentum factor, were selected. In each case, sets of values were decided based on earlier reported works. Neural network-based classifications were carried out for as many as 450 combinations of these parameters. Finally, a graphical analysis of the results obtained was carried out to understand the relationship among these parameters. A table of recommended values for these parameters for achieving 90 per cent and higher classification accuracy was generated and used in the classification of an IRS-1B LISS II image. The analysis suggests the existence of an intricate relationship among these parameters and calls for a wider series of classification experiments, as well as a more detailed analysis of the relationships.
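
    The kind of parameter sweep described above (training sample size, hidden layers, hidden nodes, learning rate, momentum) can be prototyped as a simple grid; the sketch below assumes scikit-learn and uses synthetic spectral pixels in place of the IRS-1B LISS II bands.

```python
# Sweep the five studied parameters for a toy pixel classifier and report
# combinations reaching 90% accuracy, mimicking the "recommended values" table.
import itertools
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 4))                    # four spectral bands per pixel
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)     # toy land-cover label
X_test, y_test = X[2000:], y[2000:]

for n, layers, nodes, lr, mom in itertools.product(
        (200, 1000), (1, 2), (8, 16), (0.01, 0.1), (0.5, 0.9)):
    clf = MLPClassifier(hidden_layer_sizes=(nodes,) * layers, solver="sgd",
                        learning_rate_init=lr, momentum=mom,
                        max_iter=500, random_state=0)
    clf.fit(X[:n], y[:n])
    acc = clf.score(X_test, y_test)
    if acc >= 0.90:
        print(f"n={n} layers={layers} nodes={nodes} lr={lr} momentum={mom} acc={acc:.2f}")
```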

  9. Markovian architectural bias of recurrent neural networks.

    Science.gov (United States)

    Tino, Peter; Cernanský, Michal; Benusková, Lubica

    2004-01-01

    In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not throwing away the continuous state space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure. Index Terms-Complex symbolic sequences, information latching problem, iterative function systems, Markov models, recurrent neural networks (RNNs).

  10. Handwritten Digits Recognition Using Neural Computing

    Directory of Open Access Journals (Sweden)

    Călin Enăchescu

    2009-12-01

    Full Text Available In this paper we present a method for the recognition of handwritten digits and a practical implementation of this method for real-time recognition. A theoretical framework for the neural networks used to classify the handwritten digits is also presented. The classification task is performed using a Convolutional Neural Network (CNN). A CNN is a special type of multi-layer neural network, trained here with an optimized version of the back-propagation learning algorithm. A CNN is designed to recognize visual patterns directly from pixel images with minimal preprocessing, and is capable of recognizing patterns with extreme variability (such as handwritten characters), with robustness to distortions and simple geometric transformations. The main contributions of this paper are related to the original methods for increasing the efficiency of the learning algorithm by preprocessing the images before the learning process, and a method for increasing the precision and performance in real-time applications by removing the non-useful information from the background. By combining these strategies we have obtained an accuracy of 96.76%, using as training set the NIST (National Institute of Standards and Technology) database.

  11. Three-dimensional thinning by neural networks

    Science.gov (United States)

    Shen, Jun; Shen, Wei

    1995-10-01

    3D thinning is widely used in 3D object representation in computer vision and in trajectory planning in robotics to find the topological structure of the free space. In the present paper, we propose a 3D image thinning method based on neural networks. Each voxel in the 3D image corresponds to a set of neurons, called a 3D Thinron, in the network. Taking the 3D Thinron as the elementary unit, the global structure of the network is a 3D array in which each Thinron is connected with its 26 neighbors in the 3 X 3 X 3 neighborhood. Within the Thinron itself, the set of neurons is organized in multiple layers. In the first layer, we have neurons for boundary analysis, connectivity analysis and connectivity verification, taking as input the voxels in the 3 X 3 X 3 neighborhood and the intermediate outputs of neighboring Thinrons. In the second layer, we have the neurons for synthetical analysis to give the intermediate output of the Thinron. In the third layer, we have the decision neurons whose state determines the final output. All neurons in the Thinron are the adaline neurons of Widrow, except the connectivity analysis and verification neurons, which are nonlinear neurons. With the 3D Thinron neural network, the state transition of the network takes place automatically, and the network converges to the final steady state, which gives the resulting medial surface of the 3D objects, preserving the connectivity in the initial image. The method presented is simulated and tested on 3D images; experimental results are reported.

  12. Standard Cell-Based Implementation of a Digital Optoelectronic Neural-Network Hardware

    Science.gov (United States)

    Maier, Klaus D.; Beckstein, Clemens; Blickhan, Reinhard; Erhard, Werner

    2001-03-01

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of one layer of the multilayer perceptron circuit, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than that of purely electronic neural networks, has been successfully designed.

  13. Neural dynamics based on the recognition of neural fingerprints

    Directory of Open Access Journals (Sweden)

    José Luis eCarrillo-Medina

    2015-03-01

    Full Text Available Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g. individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e. specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible and powerful strategy.

  14. A gentle introduction to artificial neural networks.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-10-01

    Artificial neural network (ANN) is a flexible and powerful machine learning technique. However, it is underutilized in clinical medicine because of its technical challenges. The article introduces some basic ideas behind ANN and shows how to build an ANN using R in a step-by-step framework. In topology and function, an ANN is analogous to the human brain. Signals are transmitted from input nodes to output nodes. Input signals are weighted before reaching output nodes according to their respective importance, and the combined signal is then processed by an activation function. I simulated a simple example to illustrate how to build a simple ANN model using the nnet() function. This function allows for one hidden layer with a varying number of units in that layer. The basic structure of the ANN can be visualized with the plug-in plot.nnet() function. The plot function is powerful in that it allows for a variety of adjustments to the appearance of the neural network. Prediction with an ANN can be performed with the predict() function, similar to that of conventional generalized linear models. Finally, the prediction power of the ANN is examined using a confusion matrix and average accuracy. It appears that the ANN is slightly better than a conventional linear model.
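
    The article's walk-through uses R's nnet(); the same workflow (fit a one-hidden-layer network, predict, inspect a confusion matrix and accuracy) is sketched below in Python with scikit-learn, using the iris data as a stand-in example.

```python
# Analogous workflow to the R nnet() example: one hidden layer, fit, predict,
# confusion matrix, accuracy. scikit-learn is used instead of R's nnet package.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

ann = MLPClassifier(hidden_layer_sizes=(5,), max_iter=3000, random_state=1)
ann.fit(X_tr, y_tr)                 # analogous to nnet(..., size = 5)
pred = ann.predict(X_te)            # analogous to predict()

print(confusion_matrix(y_te, pred))
print("accuracy:", accuracy_score(y_te, pred))
```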

  15. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.
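
    The "fully connected layer implemented as a convolution" trick mentioned above can be illustrated directly: a dense layer over a k x k patch is rewritten as a k x k convolution, so one forward pass scores every patch of a larger feature map instead of one patch at a time. The sketch assumes PyTorch and toy dimensions.

```python
# Convert a patch-wise fully connected head into an equivalent convolution,
# then verify the two agree on a single patch.
import torch
import torch.nn as nn

k, c_in, n_out = 5, 16, 4                 # patch size, feature channels, classes

fc = nn.Linear(c_in * k * k, n_out)       # dense head applied to one k x k patch
conv = nn.Conv2d(c_in, n_out, kernel_size=k)
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(n_out, c_in, k, k))
    conv.bias.copy_(fc.bias)

feat = torch.randn(1, c_in, 32, 32)       # feature map from earlier conv layers
dense_out = conv(feat)                    # (1, n_out, 28, 28): one score map per class

patch = feat[:, :, :k, :k].reshape(1, -1) # top-left patch, flattened
print(torch.allclose(fc(patch), dense_out[:, :, 0, 0], atol=1e-5))   # True
```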

  16. Tuning Recurrent Neural Networks for Recognizing Handwritten Arabic Words

    KAUST Repository

    Qaralleh, Esam

    2013-10-01

    Artificial neural networks have the abilities to learn by example and are capable of solving problems that are hard to solve using ordinary rule-based programming. They have many design parameters that affect their performance such as the number and sizes of the hidden layers. Large sizes are slow and small sizes are generally not accurate. Tuning the neural network size is a hard task because the design space is often large and training is often a long process. We use design of experiments techniques to tune the recurrent neural network used in an Arabic handwriting recognition system. We show that best results are achieved with three hidden layers and two subsampling layers. To tune the sizes of these five layers, we use fractional factorial experiment design to limit the number of experiments to a feasible number. Moreover, we replicate the experiment configuration multiple times to overcome the randomness in the training process. The accuracy and time measurements are analyzed and modeled. The two models are then used to locate network sizes that are on the Pareto optimal frontier. The approach described in this paper reduces the label error from 26.2% to 19.8%.

  17. Neural tube defects

    Directory of Open Access Journals (Sweden)

    M.E. Marshall

    1981-09-01

    Full Text Available Neural tube defects refer to any defect in the morphogenesis of the neural tube, the most common types being spina bifida and anencephaly. Spina bifida has been recognised in skeletons found in north-eastern Morocco and estimated to have an age of almost 12 000 years. It was also known to the ancient Greek and Arabian physicians who thought that the bony defect was due to the tumour. The term spina bifida was first used by Professor Nicolai Tulp of Amsterdam in 1652. Many other terms have been used to describe this defect, but spina bifida remains the most useful general term, as it describes the separation of the vertebral elements in the midline.

  18. Quantum Neural Networks

    CERN Document Server

    Gupta, S; Gupta, Sanjay

    2002-01-01

    This paper initiates the study of quantum computing within the constraints of using a polylogarithmic ($O(\log^k n)$, $k \geq 1$) number of qubits and a polylogarithmic number of computation steps. The current research in the literature has focussed on using a polynomial number of qubits. A new mathematical model of computation called Quantum Neural Networks (QNNs) is defined, building on Deutsch's model of quantum computational network. The model introduces a nonlinear and irreversible gate, similar to the speculative operator defined by Abrams and Lloyd. The precise dynamics of this operator are defined and, while giving examples in which nonlinear Schrödinger's equations are applied, we speculate on its possible implementation. The many practical problems associated with the current model of quantum computing are alleviated in the new model. It is shown that QNNs of logarithmic size and constant depth have the same computational power as threshold circuits, which are used for modeling neural network...

  19. Neural tissue-spheres

    DEFF Research Database (Denmark)

    Andersen, Rikke K; Johansen, Mathias; Blaabjerg, Morten

    2007-01-01

    By combining new and established protocols we have developed a procedure for isolation and propagation of neural precursor cells from the forebrain subventricular zone (SVZ) of newborn rats. Small tissue blocks of the SVZ were dissected and propagated en bloc as free-floating neural tissue-spheres (NTS) in EGF and FGF2 containing medium. The spheres were cut into quarters when passaged every 10-15th day, avoiding mechanical or enzymatic dissociation in order to minimize cellular trauma and preserve intercellular contacts. For analysis of regional differences within the forebrain SVZ, NTS were [...] maintained their neurogenic potential throughout 77 days of propagation, while the ability of anterior NTS to generate neurons severely declined from day 40. The present procedure describes isolation and long-term expansion of forebrain SVZ tissue with potential preservation of the endogenous cellular [...]

  20. Analysis of neural data

    CERN Document Server

    Kass, Robert E; Brown, Emery N

    2014-01-01

    Continual improvements in data collection and processing have had a huge impact on brain research, producing data sets that are often large and complicated. By emphasizing a few fundamental principles, and a handful of ubiquitous techniques, Analysis of Neural Data provides a unified treatment of analytical methods that have become essential for contemporary researchers. Throughout the book ideas are illustrated with more than 100 examples drawn from the literature, ranging from electrophysiology, to neuroimaging, to behavior. By demonstrating the commonality among various statistical approaches the authors provide the crucial tools for gaining knowledge from diverse types of data. Aimed at experimentalists with only high-school level mathematics, as well as computationally-oriented neuroscientists who have limited familiarity with statistics, Analysis of Neural Data serves as both a self-contained introduction and a reference work.

  1. Neural network and principal component regression in non-destructive soluble solids content assessment:a comparison

    Institute of Scientific and Technical Information of China (English)

    Kim-seng CHIA; Herlina ABDUL RAHIM; Ruzairi ABDUL RAHIM

    2012-01-01

    Visible and near infrared spectroscopy is a non-destructive, green, and rapid technology that can be utilized to estimate the components of interest without conditioning it, as compared with classical analytical methods. The objective of this paper is to compare the performance of artificial neural network (ANN) (a nonlinear model) and principal component regression (PCR) (a linear model) based on visible and shortwave near infrared (VIS-SWNIR) (400-1000 nm) spectra in the non-destructive soluble solids content measurement of an apple. First, we used multiplicative scattering correction to pre-process the spectral data. Second, PCR was applied to estimate the optimal number of input variables. Third, the input variables with an optimal amount were used as the inputs of both multiple linear regression and ANN models. The initial weights and the number of hidden neurons were adjusted to optimize the performance of the ANN. Findings suggest that the predictive performance of ANN with two hidden neurons outperforms that of PCR.
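
    The pipeline described above can be sketched end to end: multiplicative scatter correction, principal component selection, then a linear model and a small ANN on the same inputs. The sketch assumes numpy and scikit-learn, uses synthetic spectra standing in for the apple VIS-SWNIR data, and picks five components as an arbitrary placeholder.

```python
# Sketch: multiplicative scatter correction (MSC), PCA-based input selection,
# then principal component regression vs. a tiny ANN on the same scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def msc(spectra):
    """Regress each spectrum on the mean spectrum and remove slope/offset."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        corrected[i] = (s - offset) / slope
    return corrected

rng = np.random.default_rng(0)
wl = np.linspace(400, 1000, 120)                            # wavelengths, nm
ssc = rng.uniform(8, 16, size=80)                           # "soluble solids content"
spectra = np.exp(-((wl - 700) / 150) ** 2)[None, :] * ssc[:, None] / 10
spectra += rng.normal(scale=0.02, size=spectra.shape)       # measurement noise
spectra = msc(spectra)

scores = PCA(n_components=5).fit_transform(spectra)         # component count assumed
print("PCR R^2:", LinearRegression().fit(scores, ssc).score(scores, ssc))
ann = MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0)
print("ANN R^2:", ann.fit(scores, ssc).score(scores, ssc))
```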

  2. Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kapil Nahar

    2012-12-01

    Full Text Available An artificial neural network is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.

  3. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  4. Coupled Neural Associative Memories

    OpenAIRE

    Karbasi, Amin; Salavati, Amir Hesam; Shokrollahi, Amin

    2013-01-01

    We propose a novel architecture to design a neural associative memory that is capable of learning a large number of patterns and recalling them later in presence of noise. It is based on dividing the neurons into local clusters and parallel plains, very similar to the architecture of the visual cortex of macaque brain. The common features of our proposed architecture with those of spatially-coupled codes enable us to show that the performance of such networks in eliminating noise is drastical...

  6. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems, and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  7. Progress in neural plasticity

    Institute of Scientific and Technical Information of China (English)

    POO; Mu-Ming

    2010-01-01

    One of the properties of the nervous system is the use-dependent plasticity of neural circuits. The structure and function of neural circuits are susceptible to changes induced by prior neuronal activity, as reflected by short- and long-term modifications of synaptic efficacy and neuronal excitability. Regarded as the most attractive cellular mechanism underlying higher cognitive functions such as learning and memory, activity-dependent synaptic plasticity has been in the spotlight of modern neuroscience since 1973, when activity-induced long-term potentiation (LTP) of hippocampal synapses was first discovered. Over the last 10 years, Chinese neuroscientists have made notable contributions to the study of the cellular and molecular mechanisms of synaptic plasticity, as well as of plasticity beyond synapses, including activity-dependent changes in intrinsic neuronal excitability, dendritic integration functions, neuron-glia signaling, and neural network activity. This work highlights some of these significant findings.

  8. Optoelectronic implementation of multilayer perceptron and Hopfield neural networks

    Science.gov (United States)

    Domanski, Andrzej W.; Olszewski, Mikolaj K.; Wolinski, Tomasz R.

    2004-11-01

    In this paper we present an optoelectronic implementation of two networks based on the multilayer perceptron and the Hopfield neural network. We propose two different methods to solve the problem of the lack of negative optical signals, which are necessary for connections between perceptron layers as well as within the Hopfield network structure. The first method, applied to the construction of the multilayer perceptron, was based on dividing the signals into two channels and then using both of them independently as positive and negative signals. The second one, applied to the implementation of the Hopfield model, was based on adding a constant value to the elements of the weight matrix. Both methods of compensating for the lack of negative optical signals were tested experimentally in optoelectronic models of the multilayer perceptron and the Hopfield neural network. Special configurations of optical fiber cables and liquid crystal multicell plates were used. In conclusion, possible applications of the optoelectronic neural networks are briefly discussed.

  9. Attractor switching by neural control of chaotic neurodynamics.

    Science.gov (United States)

    Pasemann, F; Stollenwerk, N

    1998-11-01

    Chaotic attractors of discrete-time neural networks include infinitely many unstable periodic orbits, which can be stabilized by small parameter changes in a feedback control. Here we explore the control of unstable periodic orbits in a chaotic neural network with only two neurons. Analytically, a local control algorithm is derived on the basis of least squares minimization of the future deviations between actual system states and the desired orbit. This delayed control allows a consistent neural implementation, i.e. the same types of neurons are used for chaotic and controlling modules. The control signal is realized with one layer of neurons, allowing selective switching between different stabilized periodic orbits. For chaotic modules with noise, random switching between different periodic orbits is observed.
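
    A very small numerical sketch of the setting described above: a two-neuron discrete-time sigmoid network iterated with a delayed-feedback control term on one neuron. The weights, thresholds, gain and target period below are illustrative placeholders (not the authors' parameter set), and the simple Pyragas-style delayed feedback stands in for the least-squares control law derived in the paper.

```python
# Two-neuron discrete-time network with a small delayed-feedback control term.
# Parameters are placeholders; inspect the tail of the orbit to see whether the
# feedback has stabilized a low-period cycle.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.array([[-22.0, 5.9], [-6.6, 0.0]])     # recurrent weights (illustrative)
theta = np.array([-3.4, 3.8])                 # biases (illustrative)
k, period = 0.3, 2                            # feedback gain, target orbit period

x = np.array([0.1, 0.2])
history = [x.copy()]
for t in range(300):
    u = 0.0
    if t >= period:
        u = k * (history[t - period][0] - x[0])   # delayed feedback on neuron 1
    x = theta + W @ sigmoid(x) + np.array([u, 0.0])
    history.append(x.copy())

tail = np.array(history[-20:])
print(np.round(tail[:, 0], 3))   # last 20 states of neuron 1
```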

  10. Transient stability Assessment using Artificial Neural Network Considering Fault Location

    Directory of Open Access Journals (Sweden)

    P.K.Olulope

    2010-06-01

    Full Text Available This paper describes the capability of an artificial neural network for predicting the critical clearing time of a power system. It combines the advantages of time-domain integration schemes with an artificial neural network for real-time transient stability assessment. The ANN is trained using selected features as input and the critical fault clearing time (CCT) as the desired target. A single contingency was applied and the target CCT was found using time-domain simulation. A multi-layer feed-forward neural network trained with the Levenberg-Marquardt (LM) back-propagation algorithm is used to provide the estimated CCT. The effectiveness of the ANN method is demonstrated on a single machine infinite bus system (SMIB). The simulation shows that the ANN can provide fast and accurate mapping, which makes it applicable to real-time scenarios.

  11. Adaptive Control of Flexible Redundant Manipulators Using Neural Networks

    Institute of Scientific and Technical Information of China (English)

    SONG Yimin; LI Jianxin; WANG Shiyu; LIU Jianping

    2006-01-01

    An investigation on the neural-networks-based active vibration control of flexible redundant manipulators was conducted. The smart links of the manipulator were synthesized with the flexible links, to which were attached piezoceramic actuators and strain gauge sensors. A nonlinear adaptive control strategy named neural networks based indirect adaptive control (NNIAC) was employed to improve the dynamic performance of the manipulator. The mathematical model of the 4-layered dynamic recurrent neural network (DRNN) was introduced. The neuro-identifier and the neuro-controller featuring the DRNN topology were designed off line so as to enhance the initial robustness of the NNIAC. By adjusting the neuro-identifier and the neuro-controller alternately, the manipulator was controlled on line to achieve the desired dynamic performance. Finally, a planar 3R redundant manipulator with one smart link was utilized as an illustrative example. The simulation results proved the validity of the control strategy.

  12. Multi-column Deep Neural Networks for Image Classification

    CERN Document Server

    Cireşan, Dan; Schmidhuber, Juergen

    2012-01-01

    Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.

  13. Statistical modelling of neural networks in {gamma}-spectrometry applications

    Energy Technology Data Exchange (ETDEWEB)

    Vigneron, V.; Martinez, J.M. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. de Mecanique et de Technologie; Morel, J.; Lepy, M.C. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants

    1995-12-31

    Layered neural networks, a class of models based on neural computation, are applied to the measurement of uranium enrichment, i.e. the isotope ratio {sup 235} U/({sup 235} U + {sup 236} U + {sup 238} U). The usual methods consider a limited number of {gamma}-ray and X-ray peaks and require previously calibrated instrumentation for each sample. In practice, however, the source-detector geometry conditions differ critically, so one means of improving the conventional methods is to reduce the region of interest: this is possible by focusing on the K{sub {alpha}} X region where the three elementary components are present. Real data are used to study the performance of the neural networks. Training is done with a Maximum Likelihood method to measure uranium {sup 235} U and {sup 238} U quantities in infinitely thick samples. (authors). 18 refs., 6 figs., 3 tabs.

  14. Short Term Load Forecast Using Wavelet Neural Network

    Institute of Scientific and Technical Information of China (English)

    Gui Min; Rong Fei; Luo An

    2005-01-01

    This paper presents a wavelet neural network (WNN) model combining wavelet transform and artificial neural networks for short term load forecast (STLF). Both historical load and temperature data, which have important impacts on the load level, were used in the proposed forecasting model. The model used a three-layer feed-forward network trained by the error back-propagation algorithm. To enhance the forecasting accuracy of the neural network, a wavelet multi-resolution analysis method was introduced to pre-process these data and reconstruct the predicted output. The proposed model has been evaluated with actual electricity load and temperature data of Hunan Province. The simulation results show that the model is capable of providing a reasonable forecasting accuracy in STLF.

  15. Application of dynamic recurrent neural networks in nonlinear system identification

    Science.gov (United States)

    Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang

    2006-11-01

    An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. The method is based on the idea that using the inner-state feedback of a dynamic network to describe the nonlinear kinetic characteristics of a system reflects the dynamic characteristics more directly. The recursive prediction error (RPE) learning algorithm of the SRNN is derived, and the algorithm is improved by studying a topological structure with no weight values on the recursion layer. The simulation results indicate that this kind of neural network can be used in real-time control, owing to its fewer weight values, simpler learning algorithm, higher identification speed, and higher model precision. It addresses the problems of an intricate training algorithm and a slow convergence rate caused by the complicated topological structure of the usual dynamic recurrent neural network.

  16. Earth slope reliability analysis under seismic loadings using neural network

    Institute of Scientific and Technical Information of China (English)

    PENG Huai-sheng; DENG Jian; GU De-sheng

    2005-01-01

    A new method was proposed to cope with the earth slope reliability problem under seismic loadings. The algorithm integrates the concepts of artificial neural networks, the first order second moment reliability method and the deterministic stability analysis method for earth slopes. The performance function and its derivatives in slope stability analysis under seismic loadings were approximated by a trained multi-layer feed-forward neural network with differentiable transfer functions. The statistical moments calculated from the performance function values and the corresponding gradients obtained from the neural network were then used in the first order second moment method to calculate the reliability index in slope safety analysis. Two earth slope examples were presented to illustrate the applicability of the proposed approach. The new method is effective in slope reliability analysis and has potential application to other reliability problems of complicated engineering structures with a considerably large number of random variables.
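
    A minimal numerical sketch of the first order second moment (FOSM) step described above may help: a one-hidden-layer network with differentiable transfer functions stands in for the performance function, and its value and analytic gradient at the mean point give the reliability index. The network weights, the random-variable means and the covariance matrix below are placeholders, not values from the paper.

```python
# First-order second-moment (FOSM) reliability sketch with a neural surrogate.
# The "trained" one-hidden-layer network stands in for the performance function
# g(X) of the paper; its weights are random placeholders, not real slope data.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                      # e.g. cohesion, friction angle, unit weight, seismic coeff.
W1, b1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)
W2, b2 = rng.normal(size=n_hid), 2.0    # constant bias shifts g toward the safe region

def g(x):
    """Surrogate performance function g(x) = W2.tanh(W1 x + b1) + b2."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def grad_g(x):
    """Analytic gradient of the surrogate (differentiable transfer functions)."""
    h = np.tanh(W1 @ x + b1)
    return (W2 * (1.0 - h ** 2)) @ W1

mu = np.array([1.0, 0.5, -0.2, 0.3])     # assumed means of the random variables
cov = np.diag([0.04, 0.01, 0.02, 0.01])  # assumed covariance (uncorrelated here)

g_mu = g(mu)
grad = grad_g(mu)
sigma_g = np.sqrt(grad @ cov @ grad)     # first-order variance propagation
beta = g_mu / sigma_g                    # reliability index
print(f"g(mu)={g_mu:.3f}, sigma_g={sigma_g:.3f}, beta={beta:.3f}")
```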

  17. Innovation in Layer-by-Layer Assembly.

    Science.gov (United States)

    Richardson, Joseph J; Cui, Jiwei; Björnmalm, Mattias; Braunger, Julia A; Ejima, Hirotaka; Caruso, Frank

    2016-12-14

    Methods for depositing thin films are important in generating functional materials for diverse applications in a wide variety of fields. Over the last half-century, the layer-by-layer assembly of nanoscale films has received intense and growing interest. This has been fueled by innovation in the available materials and assembly technologies, as well as the film-characterization techniques. In this Review, we explore, discuss, and detail innovation in layer-by-layer assembly in terms of past and present developments, and we highlight how these might guide future advances. A particular focus is on conventional and early developments that have only recently regained interest in the layer-by-layer assembly field. We then review unconventional assemblies and approaches that have been gaining popularity, which include inorganic/organic hybrid materials, cells and tissues, and the use of stereocomplexation, patterning, and dip-pen lithography, to name a few. A relatively recent development is the use of layer-by-layer assembly materials and techniques to assemble films in a single continuous step. We name this "quasi"-layer-by-layer assembly and discuss the impacts and innovations surrounding this approach. Finally, the application of characterization methods to monitor and evaluate layer-by-layer assembly is discussed, as innovation in this area is often overlooked but is essential for development of the field. While we intend for this Review to be easily accessible and act as a guide to researchers new to layer-by-layer assembly, we also believe it will provide insight to current researchers in the field and help guide future developments and innovation.

  18. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network, Richard... Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  19. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2016-06-30

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).

  20. A Constructive Algorithm for Feedforward Neural Networks for Medical Diagnostic Reasoning

    CERN Document Server

    Siddiquee, Abu Bakar; Kamruzzaman, S M

    2010-01-01

    This research searches for alternatives for the resolution of complex medical diagnoses where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. We describe a constructive neural network algorithm with backpropagation, offering an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in a single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to obtain an optimal size of the neural network. The algorithm was tested on several benchmark classification problems, including Cancer1, Heart, and Diabetes, with good generalization ability.
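
    The incremental construction described above can be sketched as follows. For brevity the hidden weights are drawn at random and only the output weights are refit by least squares each time a unit is added, whereas the paper trains the growing network with backpropagation; the two-class data set is synthetic rather than the Cancer1, Heart or Diabetes benchmarks.

```python
# Sketch of constructive growth of a single hidden layer: units are added one
# at a time while validation accuracy keeps improving. Hidden weights are
# random and only output weights are fit by least squares (a simplification of
# the paper's backpropagation training); the data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] > 0).astype(float)        # toy two-class labels
Xtr, ytr, Xva, yva = X[:300], y[:300], X[300:], y[300:]

def accuracy(H, w, target):
    return np.mean(((H @ w) > 0.5) == (target > 0.5))

best_acc, patience, n_hidden = 0.0, 0, 0
W, b = np.empty((0, 5)), np.empty(0)                              # growing hidden layer
while patience < 3 and n_hidden < 30:
    # add one candidate hidden unit
    W = np.vstack([W, rng.normal(size=(1, 5))])
    b = np.append(b, rng.normal())
    n_hidden += 1
    Htr = np.tanh(Xtr @ W.T + b)
    Hva = np.tanh(Xva @ W.T + b)
    w, *_ = np.linalg.lstsq(Htr, ytr, rcond=None)                 # refit output weights
    acc = accuracy(Hva, w, yva)
    if acc > best_acc + 1e-3:
        best_acc, patience = acc, 0
    else:
        patience += 1                                             # stop when growth stops helping

print(f"stopped with {n_hidden} hidden units, validation accuracy {best_acc:.2f}")
```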

  1. Prediction Model of Weekly Retail Price for Eggs Based on Chaotic Neural Network

    Institute of Scientific and Technical Information of China (English)

    LI Zhe-min; CUI Li-guo; XU Shi-wei; WENG Ling-yun; DONG Xiao-xia; LI Gan-qiong; YU Hai-peng

    2013-01-01

    This paper establishes a short-term prediction model of weekly retail prices for eggs based on a chaotic neural network, using the weekly retail prices of eggs from January 2008 to December 2012 in China. In determining the structure of the chaotic neural network, the number of input layer nodes is calculated by reconstructing the phase space and computing its saturated embedding dimension, and the number of hidden layer nodes is then estimated by trial and error. Finally, the model is applied to predict the retail prices of eggs and compared with ARIMA. The results show that the chaotic neural network has better nonlinear fitting ability and higher precision in the prediction of the weekly retail price of eggs. The empirical results also show that the chaotic neural network can be widely used in the field of short-term prediction of agricultural prices.
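
    The phase-space reconstruction step that fixes the number of input nodes can be illustrated with a short delay-embedding sketch. The series below is synthetic, and the embedding dimension m and delay tau are placeholders for the saturated embedding dimension estimated in the paper.

```python
# Delay embedding of a scalar price series: each delay vector becomes one input
# pattern, so the chosen dimension m sets the number of input-layer nodes.
import numpy as np

def delay_embed(series, m, tau=1):
    """Rows are [s[t], s[t+tau], ..., s[t+(m-1)*tau]]; targets are s[t+(m-1)*tau+1]."""
    s = np.asarray(series, dtype=float)
    n = len(s) - (m - 1) * tau - 1
    X = np.column_stack([s[i * tau:i * tau + n] for i in range(m)])
    y = s[(m - 1) * tau + 1:(m - 1) * tau + 1 + n]
    return X, y

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=260)) + 8.0       # stand-in for weekly egg prices
X, y = delay_embed(prices, m=4, tau=1)               # m=4 is a placeholder choice
print(X.shape, y.shape)                              # (256, 4) inputs -> 4 input nodes
```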

  2. [Artificial neural networks in Neurosciences].

    Science.gov (United States)

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network and its operation inspired by the nervous system and the way the inputs are coded and the process of orthogonalization of patterns.

  3. Using neural networks for prediction of nuclear parameters

    Energy Technology Data Exchange (ETDEWEB)

    Pereira Filho, Leonidas; Souto, Kelling Cabral, E-mail: leonidasmilenium@hotmail.com, E-mail: kcsouto@bol.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio de Janeiro (IFRJ), Rio de Janeiro, RJ (Brazil); Machado, Marcelo Dornellas, E-mail: dornemd@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (GCN.T/ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil). Gerencia de Combustivel Nuclear

    2013-07-01

    The earliest work on artificial neural networks (ANN) dates from 1943, when Warren McCulloch and Walter Pitts developed a study on the behavior of the biological neuron with the goal of creating a mathematical model. Further work was done until the 1980s, which witnessed an explosion of interest in ANNs, mainly due to advances in technology, especially microelectronics. Because ANNs are able to solve many problems such as approximation, classification, categorization, prediction and others, they have numerous applications in various areas, including the nuclear area. The nodal method is adopted as a tool for analyzing core parameters such as boron concentration and pin power peaks for pressurized water reactors. However, this method is extremely slow when various core evaluations must be performed, for example in core reloading optimization. To overcome this difficulty, in this paper a Multi-layer Perceptron (MLP) artificial neural network of the backpropagation type is trained to predict these values. The main objective of this work is the development of an MLP artificial neural network capable of predicting, in a very short time and with good accuracy, two important parameters used in the core reloading problem - boron concentration and power peaking factor. For the training of the neural networks, loading patterns and nuclear data used in cycle 19 of the Angra 1 nuclear power plant are provided. Three network models are constructed using the same input data and providing the following outputs: 1 - boron concentration and power peaking factor, 2 - boron concentration, and 3 - power peaking factor. (author)

  4. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    The aims of the present study were: to identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports.

  5. Artificial Neural Networks for Thermochemical Conversion of Biomass

    DEFF Research Database (Denmark)

    Puig Arnavat, Maria; Bruno, Joan Carles

    2015-01-01

    Artificial neural networks (ANNs), extensively used in different fields, have been applied for modeling biomass gasification processes in fluidized bed reactors. Two ANN models are presented, one for circulating fluidized bed gasifiers and another for bubbling fluidized bed gasifiers. Both models determine the producer gas composition and gas yield, using the biomass composition and only a few operating parameters in the input layer. Each model is composed of five ANNs with two neurons in the hidden layer. The backpropagation algorithm is used to train them with published experimental data from other authors. The obtained results show that the percentage composition of the main four gas species in producer gas (CO, CO2, H2, CH4) and the producer gas yield for a biomass fluidized bed gasifier can be successfully predicted by applying neural networks. The results obtained show high agreement

  6. A multilayer neural network model for perception of rotational motion

    Institute of Scientific and Technical Information of China (English)

    郭爱克; 孙海坚; 杨先一

    1997-01-01

    A multilayer neural network model for the perception of rotational motion has been developed using Reichardt's motion detector array of correlation type, Kohonen's self-organized feature map and Schuster-Wagner's oscillating neural network. It is shown that unsupervised learning can make the neurons in the second layer of the network self-organize in a form resembling the columnar organization of selective directions in area MT of the primate visual cortex. The output layer can interpret rotation information and give the directions and velocities of rotational motion. The computer simulation results are in agreement with some psychophysical observations of rotational perception. It is demonstrated that the temporal correlation between the oscillating neurons would be powerful for solving the "binding problem" of shear components of rotational motion.

  7. Decentralized Identification and Control in Real-Time of a Robot Manipulator via Recurrent Wavelet First-Order Neural Network

    Directory of Open Access Journals (Sweden)

    Luis A. Vázquez

    2015-01-01

    A decentralized recurrent wavelet first-order neural network (RWFONN) structure is presented. The use of a Morlet wavelet activation function allows proposing a continuous-time neural structure with a single layer and a single neuron in order to identify online, in a series-parallel configuration using the filtered error (FE) training algorithm, the dynamic behavior of each joint of a two-degree-of-freedom (DOF) vertical robot manipulator whose parameters, such as friction and inertia, are unknown. Based on the RWFONN subsystem, a decentralized neural controller is designed via a backstepping approach. The performance of the decentralized wavelet neural controller is validated via real-time results.

  8. A neural network model of ventriloquism effect and aftereffect.

    Directory of Open Access Journals (Sweden)

    Elisa Magosso

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: (i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; (ii) the amount of the ventriloquism effect changes with the visual-auditory spatial disparity; (iii) ventriloquism is a robust behavior of the network with respect to changes in parameter values. Moreover, the model implements Hebbian rules for the potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  9. Genetic algorithm based support vector machine regression in predicting wave transmission of horizontally interlaced multi-layer moored floating pipe breakwater

    Digital Repository Service at National Institute of Oceanography (India)

    Patil, S.G.; Mandal, S.; Hegde, A.V.

    ..., number of hidden layers and neurons by trial and error, which is time consuming. To overcome the problems inherent in ANN training procedures, Jeng et al. [19] adopted the concept of genetic algorithm based training of ANN models, which provided... -fuzzy inference system (ANFIS), which is a five-layer feed-forward neural network that includes a fuzzification layer, rule layer, normalization layer, defuzzification layer and a single summation neuron. It is a hybrid neuro-fuzzy technique that brings...

  10. Correlation between retinal neural fiber layer and visual field defect in pituitary tumor patients

    Institute of Scientific and Technical Information of China (English)

    施维; 钟勇; 董方田; 艾凤荣; 赵鹏

    2008-01-01

    Objective: To study the correlation between the retinal nerve fiber layer and visual field defect, and to compare the sensitivity of GDxVCC and Octopus automated perimetry in the diagnosis and follow-up of pituitary tumor patients. Methods: 70 pituitary tumor patients (140 eyes), diagnosed by endocrinologic examination, MRI or surgical sample pathology at Peking Union Medical College Hospital, were included in the current study. These patients were examined with both GDxVCC and Octopus automated perimetry (VF). The correlation between the two methods was assessed using six parameters in three pairs (the mean thickness, superior mean thickness and inferior mean thickness from GDxVCC, in comparison with the mean sensitivity and square root of loss variance from VF, and the nerve fiber indicator with the mean defect, respectively). The GDxVCC data on nerve fiber structure and the VF data on the psychophysical function of the visual pathway in pituitary tumor patients were analyzed. All patients received the VF test with the Octopus 101 perimeter using the TOP program. Results: VF for pituitary tumor patients had a sensitivity of 70.00% and a specificity of 34.00%; GDxVCC had a sensitivity of 72.22% and a specificity of 56.00%; the combined GDxVCC and VF method had a sensitivity of 83.33% and a specificity of 58.00% (statistically significantly different from GDxVCC or VF alone). The correlations for all paired parameters from GDxVCC and VF were statistically significant. The RNFL mean thickness, superior mean thickness and inferior mean thickness from GDxVCC were positively correlated with the mean sensitivity (MS) and the modified superior and inferior MS (P<0.01); left eyes showed the strongest correlation between the inferior sectoral RNFL and the superior VF indexes (P<0.01). On the other hand, the RNFL mean thickness from GDxVCC was negatively correlated with the square root of the loss variance (LS) of VF (P<0.01). The nerve fiber indicator was positively correlated with the mean defect (MD) (P<0.01). Conclusions: The combination of GDx

  11. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still unclassified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, removing irrelevant events such as birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' number of discharges was correctly computed. The neural network used in this project achieved a
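
    The detection module's frame-differencing idea can be sketched with plain arrays: flag any frame whose mean absolute difference from the previous frame exceeds a threshold. The paper works on real video through OpenCV; the frames and threshold below are synthetic placeholders.

```python
# Detection-module sketch: flag frames that differ significantly from the
# previous frame. Frames are synthetic grayscale arrays and the threshold is a
# placeholder; the real system uses OpenCV on recorded video.
import numpy as np

rng = np.random.default_rng(3)
frames = rng.integers(0, 10, size=(100, 120, 160)).astype(np.int16)  # dark noisy sky
frames[40:43] += 120                                                  # simulated bright flash
frames = np.clip(frames, 0, 255)

THRESHOLD = 10.0   # mean absolute difference that counts as "something happened"

events = []
for i in range(1, len(frames)):
    diff = np.abs(frames[i] - frames[i - 1]).mean()
    if diff > THRESHOLD:
        events.append(i)

print("candidate event frames:", events)   # flash onset/offset around frames 40 and 43
```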

  12. The Ridge Function Representation of Polynomials and an Application to Neural Networks

    Institute of Scientific and Technical Information of China (English)

    Ting Fan XIE; Fei Long CAO

    2011-01-01

    The first goal of this paper is to establish some properties of the ridge function representation for multivariate polynomials, and the second one is to apply these results to the problem of approximation by neural networks. We find that for continuous functions, the rate of approximation obtained by a neural network with one hidden layer is no slower than that of an algebraic polynomial.

  13. Prediction Model of Soil Nutrients Loss Based on Artificial Neural Network

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    On the basis of artificial neural network theory, a back-propagation neural network with one middle layer is built in this paper, and its algorithms are also given. Using this BP network model, the case of the Malian River basin is studied. The calculated results show that the solution based on the BP algorithms is consistent with that of a multiple-variable linear regression model. They also indicate that the BP model in this paper is reasonable and the BP algorithms are feasible.

  14. A Neural Network with Minimal Structure for Maglev System Modeling and Control

    OpenAIRE

    1999-01-01

    The paper is concerned with the determination of a minimal structure of a one-hidden-layer perceptron for system identification and control. Structural identification is a key issue in neural modeling. Decreasing the size of neural networks is a way to avoid overfitting and bad generalization, and moreover leads to simpler models, which are required for real-time applications, particularly in control. A learning algorithm and a pruning method both based on a

  15. The use of global image characteristics for neural network pattern recognitions

    Science.gov (United States)

    Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.

    2017-04-01

    A recognition system is considered in which the information is transferred by images of symbols generated by a television camera. The coefficients of a two-dimensional Fourier transformation, generated in a special way, are used as descriptors of the objects. For the classification task, a one-layer neural network trained on reference images is used. Fast learning of the neural network, with a single calculation of the coefficients per neuron, is applied.
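
    A sketch of the overall pipeline is given below: low-frequency magnitudes of the two-dimensional Fourier transform serve as global descriptors, and a one-layer network is trained on reference images. Since the paper's exact coefficient selection is not specified, an 8x8 low-frequency block is assumed, the symbol images are synthetic, and the simple delta rule is used for training.

```python
# Global 2-D Fourier descriptors feeding a one-layer classifier (sketch).
# The "special way" of forming coefficients in the paper is unknown, so an
# 8x8 low-frequency magnitude block is assumed; the symbol images are synthetic.
import numpy as np

rng = np.random.default_rng(4)

def descriptor(img, k=8):
    spec = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    c = np.array(spec.shape) // 2
    feat = spec[c[0] - k // 2:c[0] + k // 2, c[1] - k // 2:c[1] + k // 2].ravel()
    return feat / (np.linalg.norm(feat) + 1e-9)

def make_symbol(cls):
    """Two synthetic symbol classes: horizontal bar (0) vs vertical bar (1)."""
    img = rng.normal(0, 0.1, size=(32, 32))
    if cls == 0:
        img[14:18, :] += 1.0
    else:
        img[:, 14:18] += 1.0
    return img

X = np.array([descriptor(make_symbol(c % 2)) for c in range(200)])
y = np.array([c % 2 for c in range(200)])

# one-layer network with a single sigmoid output, trained with the delta rule
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.5
for _ in range(200):
    out = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    err = y - out
    w += lr * X.T @ err / len(X)
    b += lr * err.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```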

  16. A hardware implementation of artificial neural networks using field programmable gate arrays

    Science.gov (United States)

    Won, E.

    2007-11-01

    An artificial neural network algorithm is implemented using a low-cost field programmable gate array hardware. One hidden layer is used in the feed-forward neural network structure in order to discriminate one class of patterns from the other class in real time. In this work, the training of the network is performed in the off-line computing environment and the results of the training are configured to the hardware in order to minimize the latency of the neural computation. With five 8-bit input patterns, six hidden nodes, and one 8-bit output, the implemented hardware neural network makes decisions on a set of input patterns in 11 clock cycles, or less than 200 ns with a 60 MHz clock. The result from the hardware neural computation is well predictable based on the off-line computation. This implementation may be used in level 1 hardware triggers in high energy physics experiments.
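
    The kind of fixed-point computation such a hardware network performs can be sketched with integer arithmetic: 8-bit weights and inputs, wide accumulators, a shift to rescale, and a small sigmoid lookup table. The layout (five inputs, six hidden nodes, one output) follows the abstract, but the weights, shift amount and lookup scaling below are placeholders rather than the trained, configured values.

```python
# Integer-only inference sketch mirroring a 5-input / 6-hidden / 1-output net.
# Weight values are 8-bit placeholders (range -128..127) held in wide integers
# so the multiply-accumulates do not overflow; a shift rescales after each
# layer and a 256-entry lookup table plays the role of the activation.
import numpy as np

rng = np.random.default_rng(5)
W1 = rng.integers(-128, 128, size=(6, 5))
B1 = rng.integers(-128, 128, size=6)
W2 = rng.integers(-128, 128, size=(1, 6))
B2 = rng.integers(-128, 128, size=1)
SHIFT = 7                                   # rescale accumulators back toward 8-bit range

# 256-entry sigmoid lookup table: signed 8-bit input -> unsigned 8-bit output
lut_in = np.arange(-128, 128)
SIGMOID_LUT = np.round(255.0 / (1.0 + np.exp(-lut_in / 32.0))).astype(np.int64)

def act(v):
    v = np.clip(v, -128, 127) + 128         # map to table index 0..255
    return SIGMOID_LUT[v]

def forward(x8):
    """x8: five unsigned 8-bit input values."""
    h = (W1 @ x8 + B1) >> SHIFT             # integer MAC then arithmetic shift
    h = act(h)
    o = (W2 @ h + B2) >> SHIFT
    return act(o)

x = np.array([200, 13, 77, 5, 150])
print("8-bit output:", forward(x))
```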

  17. Double Glow Plasma Surface Alloying Process Modeling Using Artificial Neural Networks

    Institute of Scientific and Technical Information of China (English)

    Jiang XU; Xishan XIE; Zhong XU

    2003-01-01

    A model is developed for predicting the correlation between processing parameters and the technical targets of double glow plasma surface alloying by applying an artificial neural network (ANN). The input parameters of the neural network (NN) are the source voltage, workpiece voltage, working pressure and distance between the source electrode and the workpiece. The output of the NN model is three important technical targets, namely the gross element content, the thickness of the surface alloying layer and the absorption rate (the ratio of the mass loss of the source materials to the increase in mass of the workpiece) in the processing of double glow plasma surface alloying. The processing parameters and technical targets are then used as a training set for the artificial neural network. The model is based on a multilayer feedforward neural network. A very good performance of the neural network is achieved and the calculated results are in good agreement with the experimental ones.

  18. Radial basis function neural network for power system load-flow

    Energy Technology Data Exchange (ETDEWEB)

    Karami, A.; Mohammadi, M.S. [Faculty of Engineering, The University of Guilan, P.O. Box 41635-3756, Rasht (Iran)

    2008-01-15

    This paper presents a method for solving the load-flow problem of the electric power systems using radial basis function (RBF) neural network with a fast hybrid training method. The main idea is that some operating conditions (values) are needed to solve the set of non-linear algebraic equations of load-flow by employing an iterative numerical technique. Therefore, we may view the outputs of a load-flow program as functions of the operating conditions. Indeed, we are faced with a function approximation problem and this can be done by an RBF neural network. The proposed approach has been successfully applied to the 10-machine and 39-bus New England test system. In addition, this method has been compared with that of a multi-layer perceptron (MLP) neural network model. The simulation results show that the RBF neural network is a simpler method to implement and requires less training time to converge than the MLP neural network. (author)
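
    One common fast hybrid scheme for RBF networks, sketched below, picks the centres from the training data and solves for the output weights by linear least squares; whether this matches the paper's exact hybrid method is an assumption, and the target function here is synthetic rather than the New England test system.

```python
# RBF-network sketch: centres taken from the data, a fixed Gaussian width, and
# output weights solved by linear least squares - one common fast hybrid
# training scheme (an assumption; the paper's exact scheme is not specified
# here). The target function is a synthetic stand-in for a load-flow mapping.
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(500, 3))                 # operating conditions (toy)
y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2]           # stand-in load-flow output

n_centres = 40
centres = X[rng.choice(len(X), n_centres, replace=False)]
width = 0.5                                           # shared Gaussian width (assumed)

def design_matrix(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = design_matrix(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # linear solve = fast "training"

Xtest = rng.uniform(-1, 1, size=(100, 3))
y_true = np.sin(2 * Xtest[:, 0]) + Xtest[:, 1] * Xtest[:, 2]
y_pred = design_matrix(Xtest) @ w
print("test RMSE:", np.sqrt(np.mean((y_pred - y_true) ** 2)))
```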

  19. Serotonin-immunoreactive neural system and contractile system in the hydroid Cladonema (Cnidaria, Hydrozoa).

    Science.gov (United States)

    Mayorova, T D; Kosevich, I A

    2013-12-01

    Serotonin is a widespread neurotransmitter which is present in almost all animal phyla including lower metazoans such as Cnidaria. Serotonin detected in the polyps of several cnidarian species participates in the functioning of a neural system. It was suggested that serotonin coordinates polyp behavior. For example, serotonin may be involved in muscle contraction and/or cnidocyte discharge. However, the role of serotonin in cnidarians is not revealed completely yet. The aim of this study was to investigate the neural system of Cladonema radiatum polyps. We detected the net of serotonin-positive processes within the whole hydranth body using anti-serotonin antibodies. The hypostome and tentacles had denser neural net in comparison with the gastric region. Electron microscopy revealed muscle processes throughout the hydranth body. Neural processes with specific vesicles and neurotubules in their cytoplasm were also shown at an ultrastructural level. This work demonstrates the structure of serotonin-positive neural system and smooth muscle layer in C. radiatum hydranths.

  20. Determination of Activation Functions in A Feedforward Neural Network by using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Oğuz ÜSTÜN

    2009-03-01

    In this study, the activation functions of all layers of a multilayered feedforward neural network have been determined by using a genetic algorithm. The main criterion showing the efficiency of a neural network is its ability to approximate the desired output with the same number of nodes and connection weights. One of the important parameters determining this performance is the choice of a proper activation function. In classical neural network design, a network is designed by choosing one of the generally known activation functions. In the presented study, a table has been generated for the activation functions, and the ideal activation function for each node has been chosen from this table by using the genetic algorithm. Two-dimensional regression problem clusters have been used to compare the performance of the classical static neural network and the genetic algorithm based neural network. Test results reveal that the proposed method has a high approximation capacity.
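
    The idea of evolving one activation function per node can be sketched as follows. Each chromosome holds an index into a table of candidate activations for every hidden node, and fitness is the validation error of the resulting network. To keep the sketch short, the hidden weights are fixed at random values and only the output weights are refit per candidate, whereas the paper selects activations for a fully trained network; the regression data is synthetic.

```python
# GA sketch for per-node activation selection. Simplification: hidden weights
# are fixed and random, and only output weights are refit by least squares for
# each candidate chromosome; the data is synthetic.
import numpy as np

rng = np.random.default_rng(7)
ACTIVATIONS = [np.tanh, lambda v: 1 / (1 + np.exp(-v)), lambda v: np.maximum(v, 0), np.sin]

n_in, n_hid = 2, 6
W, b = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)   # fixed hidden layer
X = rng.uniform(-2, 2, size=(300, n_in))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

def fitness(chromo):
    def hidden(Xs):
        Z = Xs @ W.T + b
        return np.column_stack([ACTIVATIONS[g](Z[:, j]) for j, g in enumerate(chromo)])
    Htr, Hva = hidden(Xtr), hidden(Xva)
    w, *_ = np.linalg.lstsq(Htr, ytr, rcond=None)
    return -np.mean((Hva @ w - yva) ** 2)              # higher is better

pop = rng.integers(0, len(ACTIVATIONS), size=(20, n_hid))
for gen in range(30):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]       # truncation selection
    children = []
    while len(children) < len(pop):
        pa, pb = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_hid)                   # one-point crossover
        child = np.concatenate([pa[:cut], pb[cut:]])
        if rng.random() < 0.2:                         # mutation
            child[rng.integers(n_hid)] = rng.integers(len(ACTIVATIONS))
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(c) for c in pop])]
print("chosen activation indices per hidden node:", best)
```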

  1. Practical target recognition in infrared imagery using a neural network

    Science.gov (United States)

    Crowe, Alistair A.; Patel, A.; Wright, William A.; Green, Michael A.; Hughes, Andrew D.

    1992-07-01

    This paper describes work undertaken by British Aerospace (BAe) on the development of a neural network classifier for automatic recognition of land based targets in infrared imagery. The classifier used a histogram segmentation process to extract regions from the infrared imagery. A set of features were calculated for each region to form a feature vector describing the region. These feature vectors were then used as the input to the neural classifier. Two neural classifiers were investigated based upon the multi-layer perceptron and radial basis function networks. In order to assess the merits of a neural network approach, the neural classifiers were compared with a conventional classifier originally developed by British Aerospace (Systems and Equipment) Ltd., under contract to RARDE (Chertsey), for the purpose of infrared target recognition. This conventional system was based upon a Schurman classifier which operates on data transformed using a Hotelling Trace Transform. The ability of the classifiers to perform practical recognition of real-world targets was evaluated by training and testing the classifiers on real imagery obtained from mock land battles and military vehicle trials.

  2. Neural networks within multi-core optic fibers.

    Science.gov (United States)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  3. Mandarin Chinese Tone Recognition with an Artificial Neural Network

    Institute of Scientific and Technical Information of China (English)

    XU Li; ZHANG Wenle; ZHOU Ning; LEE Chaoyang; LI Yongxin; CHEN Xiuwu; ZHAO Xiaoyan

    2006-01-01

    Mandarin Chinese tone patterns vary in one of four ways, i.e., (1) high level; (2) rising; (3) low falling and rising; and (4) high falling. The present study examines the efficacy of an artificial neural network in recognizing these tone patterns. Speech data were recorded from 12 children (3-6 years of age) and 15 adults. All subjects were native Mandarin Chinese speakers. The fundamental frequency (F0) of each monosyllabic word in the speech data was extracted with an autocorrelation method. The pitch data (i.e., the F0 contours) were the inputs to a feed-forward backpropagation artificial neural network. The number of inputs to the neural network varied from 1 to 16 and the hidden layer of the network contained between 1 and 16 neurons. The output of the network consisted of four neurons representing the four tone patterns of Mandarin Chinese. After being trained with the Levenberg-Marquardt optimization, the neural network was able to successfully classify the tone patterns with an accuracy of about 90% correct for speech samples from both adults and children. The artificial neural network may provide an objective and effective way of assessing tone production in prelingually deafened children who have received cochlear implants.
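
    The autocorrelation-based F0 extraction that produces the network inputs can be sketched briefly: for each frame, the lag of the autocorrelation peak inside a plausible pitch range gives F0 = fs / lag. The signal below is a synthetic rising tone, and the frame length and search range are assumptions.

```python
# Autocorrelation pitch-extraction sketch: the resulting F0 contour is the kind
# of input the tone classifier receives. The "speech" is a synthetic harmonic
# signal; frame length and pitch search range are assumptions.
import numpy as np

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
f0_true = 220.0 * (1 + 0.3 * t)                        # rising tone, about 220 -> 250 Hz
signal = np.sin(2 * np.pi * np.cumsum(f0_true) / fs)

def frame_f0(frame, fs, fmin=80.0, fmax=400.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)            # lag range for 80-400 Hz
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

frame_len = 640                                        # 40 ms frames
contour = [frame_f0(signal[i:i + frame_len], fs)
           for i in range(0, len(signal) - frame_len, frame_len)]
print(["%.0f" % f for f in contour])                   # rising F0 contour -> tone 2
```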

  5. Adaptive control of system with hysteresis using neural networks

    Institute of Scientific and Technical Information of China (English)

    Li Chuntao; Tan Yonghong

    2006-01-01

    An adaptive control scheme is developed for a class of single-input nonlinear systems preceded by unknown hysteresis, which is a non-differentiable and multi-value mapping nonlinearity. The controller based on the three-layer neural network (NN), whose weights are derived from Lyapunov stability analysis, guarantees closed-loop semiglobal stability and convergence of the tracking errors to a small residual set. An example is used to confirm the effectiveness of the proposed control scheme.

  6. Use of simulated neural networks of aerial image classification

    Science.gov (United States)

    Medina, Frances I.; Vasquez, Ramon

    1991-01-01

    The utility of a one-layer neural network for aerial image classification is examined. The network was trained with the delta rule. This method was shown to be useful as a classifier for aerial images with good resolution. It is fast; it is easy to implement because it is distribution-free, so nothing about the statistical distribution of the data is needed; and it is very efficient as a boundary detector.

  7. Successful prediction of horse racing results using a neural network

    OpenAIRE

    Allinson, N. M.; Merritt, D.

    1991-01-01

    Most application work within neural computing continues to employ multi-layer perceptrons (MLP). Though many variations of the fully interconnected feed-forward MLP, and even more variations of the back-propagation learning rule, exist, the first section of the paper attempts to highlight several properties of these standard networks. The second section outlines an application, namely the prediction of horse racing results.

  8. Process for forming synapses in neural networks and resistor therefor

    Science.gov (United States)

    Fu, C.Y.

    1996-07-23

    Customizable neural network in which one or more resistors form each synapse is disclosed. All the resistors in the synaptic array are identical, thus simplifying the processing issues. Highly doped, amorphous silicon is used as the resistor material, to create extremely high resistances occupying very small spaces. Connected in series with each resistor in the array is at least one severable conductor whose uppermost layer has a lower reflectivity of laser energy than typical metal conductors at a desired laser wavelength. 5 figs.

  9. Process for forming synapses in neural networks and resistor therefor

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Chi Y. (San Francisco, CA)

    1996-01-01

    Customizable neural network in which one or more resistors form each synapse. All the resistors in the synaptic array are identical, thus simplifying the processing issues. Highly doped, amorphous silicon is used as the resistor material, to create extremely high resistances occupying very small spaces. Connected in series with each resistor in the array is at least one severable conductor whose uppermost layer has a lower reflectivity of laser energy than typical metal conductors at a desired laser wavelength.

  10. Wavelet neural network and its application in fault diagnosis of rolling bearing

    Science.gov (United States)

    Wang, Guo-Feng; Wang, Tai-Yong

    2005-12-01

    In order to realize fault diagnosis of the rolling bearings of rotating machines, a wavelet neural network is proposed. This kind of artificial neural network takes a wavelet function as the neuron of the hidden layer so as to realize a nonlinear mapping between faults and symptoms. An algorithm based on the minimum mean square error is given to obtain the weight values of the network and the dilation and translation parameters of the wavelet function. To verify the correctness of the wavelet neural network, it was adopted for diagnosing the fault type and location of rolling bearings. The final result shows that it can recognize faults of the outer race, inner race and roller accurately.
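
    A forward pass of such a wavelet neural network can be sketched with a Morlet hidden layer, each unit applying psi((u - t) / d) with its own translation t and dilation d. The weights, translations and dilations below are random placeholders; in the paper they are obtained by the minimum mean square error training described above.

```python
# Wavelet-neural-network forward pass sketch: each hidden unit is a Morlet
# wavelet neuron with its own translation and dilation. All parameters here
# are random placeholders, not fitted bearing-diagnosis values.
import numpy as np

def morlet(u):
    return np.cos(1.75 * u) * np.exp(-u ** 2 / 2.0)

rng = np.random.default_rng(8)
n_in, n_hid, n_out = 4, 10, 3            # e.g. features -> {outer race, inner race, roller}
Win = rng.normal(size=(n_hid, n_in))     # input-to-hidden weights
t = rng.normal(size=n_hid)               # translations
d = rng.uniform(0.5, 2.0, size=n_hid)    # dilations
Wout = rng.normal(size=(n_out, n_hid))

def forward(x):
    u = (Win @ x - t) / d                # wavelet neuron argument
    h = morlet(u)
    return Wout @ h

x = rng.normal(size=n_in)                # a vibration feature vector (placeholder)
print("fault-class scores:", forward(x))
```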

  11. Mesh Generation from Dense 3D Scattered Data Using Neural Network

    Institute of Scientific and Technical Information of China (English)

    ZHANGWei; JIANGXian-feng; CHENLi-neng; MAYa-liang

    2004-01-01

    An improved self-organizing feature map (SOFM) neural network is presented to generate rectangular and hexagonal lattices with a normal vector attached to each vertex. After the neural network was trained, the whole set of scattered data was divided into sub-regions whose cluster centers were represented by the weight vectors of the neurons at the output layer of the neural network. The weight vectors of the neurons were used to approximate the dense 3-D scattered points, so the dense scattered points could be reduced to a reasonable scale while the topological features of the whole point set were retained.
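
    The core of the approach, reducing a dense scattered point set to the weight vectors of a trained SOFM grid, can be sketched as follows. The point cloud, grid size, learning-rate and neighbourhood schedules are assumptions, and the normal-vector estimation mentioned above is not shown.

```python
# Minimal SOFM sketch: a rectangular grid of weight vectors is trained on a
# dense 3-D point cloud, so the grid nodes end up approximating (and thinning)
# the cloud while keeping its topology. Point cloud and schedules are assumed.
import numpy as np

rng = np.random.default_rng(9)
pts = rng.normal(size=(5000, 3))
pts[:, 2] = 0.3 * np.sin(3 * pts[:, 0]) + 0.1 * rng.normal(size=5000)   # wavy surface

rows, cols = 12, 12
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
grid = grid.reshape(-1, 2).astype(float)              # node coordinates on the map
W = rng.normal(scale=0.5, size=(rows * cols, 3))      # weight vectors in data space

n_iter = 20000
for it in range(n_iter):
    x = pts[rng.integers(len(pts))]
    bmu = np.argmin(((W - x) ** 2).sum(1))            # best-matching unit
    frac = it / n_iter
    lr = 0.5 * (1 - frac) + 0.01                      # decaying learning rate
    sigma = 3.0 * (1 - frac) + 0.5                    # shrinking neighbourhood
    dist2 = ((grid - grid[bmu]) ** 2).sum(1)
    h = np.exp(-dist2 / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)                    # pull neighbourhood toward x

print("reduced", len(pts), "points to", len(W), "grid vertices")
```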

  12. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Moreover, the results showed that this technique can be successfully applied in process control algorithms, given the long processing time of the process and the flexibility of the technique in the incorporation of new data

  13. The Application of Imperialist Competitive Algorithm based on Chaos Theory in Perceptron Neural Network

    Science.gov (United States)

    Zhang, Xiuping

    In this paper, the weights of a neural network are updated using the Chaotic Imperialist Competitive Algorithm (CICA). A three-layered perceptron neural network is applied to predict the maximum value of stocks traded on the Tehran stock exchange. We trained this neural network with the CICA, ICA, PSO and GA algorithms and compared the results with each other. Consideration of the results showed that the training and test errors of the network trained by the CICA algorithm are reduced in comparison with the other three methods.

  14. Application of simple dynamic recurrent neural networks in solid granule flowrate modeling

    Science.gov (United States)

    Du, Yun; Sun, Huiqin; Tian, Qiang; Ren, Haiping; Zhang, Suying

    2008-10-01

    Building a solid granule flowrate model with a simple dynamic recurrent neural network (SRNN) is presented in this paper. Because the usual dynamic recurrent neural network has an intricate network structure and a slow training algorithm, a simple recurrent neural network without weight values on the recursion layer is studied. The recurrent prediction error (RPE) learning algorithm for the SRNN, which adjusts the weight values and the threshold values, is derived. The modeling results for the solid granule flowrate indicate that the model has a fast convergence rate and high precision, and that it can be used in real time.

  15. Dynamic versus static neural network model for rainfall forecasting at Klang River Basin, Malaysia

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2011-07-01

    Rainfall is considered one of the major components of the hydrological process, and it plays a significant part in evaluating drought and flooding events. Therefore, it is important to have an accurate model for rainfall forecasting. Recently, several data-driven modeling approaches have been investigated to perform such a forecasting task, such as Multi-Layer Perceptron Neural Networks (MLP-NN). In fact, rainfall time series modeling involves an important temporal dimension. On the other hand, the classical MLP-NN is a static and memoryless network architecture that is effective for complex nonlinear static mapping. This research focuses on investigating the potential of introducing a neural network that could address the temporal relationships of the rainfall series.

    Two different static neural networks and one dynamic neural network, namely the Multi-Layer Perceptron Neural Network (MLP-NN), the Radial Basis Function Neural Network (RBFNN) and the Input Delay Neural Network (IDNN), respectively, have been examined in this study. These models were developed for two time horizons, monthly and weekly rainfall forecasting, at the Klang River, Malaysia. Data collected over 12 yr (1997–2008) on a weekly basis and 22 yr (1987–2008) on a monthly basis were used to develop and examine the performance of the proposed models. Comprehensive comparison analyses were carried out to evaluate the performance of the proposed static and dynamic neural networks. Results showed that the MLP-NN model was able to follow the general trend of the actual rainfall, yet it remained relatively poor. The RBFNN model achieved better accuracy than the MLP-NN model. Moreover, the forecasting accuracy of the IDNN model was superior during both the training and testing stages, which proves a consistent level of accuracy with seen and unseen data. Furthermore, the IDNN significantly enhances the forecasting accuracy compared with the other static neural network models, as they could memorize the

  16. A Tensor Neural Network with Layerwise Pretraining:Towards Effective Answer Retrieval

    Institute of Scientific and Technical Information of China (English)

    Xin-Qi Bao; Yun-Fang Wu∗

    2016-01-01

    In this paper we address the answer retrieval problem in community-based question answering. To fully capture the interactions between question-answer pairs, we propose an original tensor neural network to model the relevance between them. The question and candidate answers are separately embedded into different latent semantic spaces, and a 3-way tensor is then utilized to model the interactions between latent semantics. To initialize the network layers properly, we propose a novel algorithm called denoising tensor autoencoder (DTAE), and then implement a layerwise pretraining strategy using denoising autoencoders (DAE) on word embedding layers and DTAE on the tensor layer. The experimental results show that our tensor neural network outperforms various baselines with other competitive neural network methods, and our pretraining DTAE strategy improves the system’s performance and robustness.
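
    The 3-way tensor interaction at the heart of the model can be sketched in a few lines: each tensor slice contributes a bilinear term between the question and answer embeddings, and the slice scores are combined into one relevance value. Embedding size, slice count and all values below are placeholders, and the DTAE/DAE pretraining is not shown.

```python
# 3-way tensor relevance sketch: each slice k contributes q^T T[k] a, and the
# slice scores are combined into one relevance value. How the slice scores are
# combined here (tanh + linear weights) is an assumption of this sketch, and
# all values are random placeholders; pretraining is not shown.
import numpy as np

rng = np.random.default_rng(10)
d, k = 50, 4                                  # embedding size, tensor slices (assumed)
T = rng.normal(scale=0.1, size=(k, d, d))     # 3-way interaction tensor
v = rng.normal(size=k)                        # combines slice scores
q = rng.normal(size=d)                        # question embedding (placeholder)
answers = rng.normal(size=(5, d))             # candidate answer embeddings

def relevance(q, a):
    slice_scores = np.einsum("i,kij,j->k", q, T, a)   # q^T T[k] a for each slice
    return float(v @ np.tanh(slice_scores))

scores = [relevance(q, a) for a in answers]
print("best candidate:", int(np.argmax(scores)), scores)
```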

  17. Implementation of neural network for color properties of polycarbonates

    Science.gov (United States)

    Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.

    2014-05-01

    In the present paper, the applicability of artificial neural networks (ANN) is investigated for the color properties of plastics. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18 and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM), in a feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives and pigments, while the three tristimulus color values L*, a* and b* were used as the output layer. Statistical analysis in terms of the root mean square (RMS) error, the absolute fraction of variance (R squared), and the mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons in the hidden layer of the feed-forward back-propagation ANN model showed the best results in the present study. The degree of accuracy of the ANN model in reducing errors proved acceptable in all the statistical analyses, as shown in the results. It was concluded that the ANN provides a feasible method of error reduction for specific color tristimulus values.

  18. YAP/TAZ enhance mammalian embryonic neural stem cell characteristics in a Tead-dependent manner.

    Science.gov (United States)

    Han, Dasol; Byun, Sung-Hyun; Park, Soojeong; Kim, Juwan; Kim, Inhee; Ha, Soobong; Kwon, Mookwang; Yoon, Keejung

    2015-02-27

    Mammalian brain development is regulated by multiple signaling pathways controlling cell proliferation, migration and differentiation. Here we show that YAP/TAZ enhance embryonic neural stem cell characteristics in a cell autonomous fashion using diverse experimental approaches. Introduction of retroviral vectors expressing YAP or TAZ into the mouse embryonic brain induced cell localization in the ventricular zone (VZ), which is the embryonic neural stem cell niche. This change in cell distribution in the cortical layer is due to the increased stemness of infected cells; YAP-expressing cells were colabeled with Sox2, a neural stem cell marker, and YAP/TAZ increased the frequency and size of neurospheres, indicating enhanced self-renewal- and proliferative ability of neural stem cells. These effects appear to be TEA domain family transcription factor (Tead)-dependent; a Tead binding-defective YAP mutant lost the ability to promote neural stem cell characteristics. Consistently, in utero gene transfer of a constitutively active form of Tead2 (Tead2-VP16) recapitulated all the features of YAP/TAZ overexpression, and dominant negative Tead2-EnR resulted in marked cell exit from the VZ toward outer cortical layers. Taken together, these results indicate that the Tead-dependent YAP/TAZ signaling pathway plays important roles in neural stem cell maintenance by enhancing stemness of neural stem cells during mammalian brain development. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    Science.gov (United States)

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

    Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused either on solely mimicking the neuron or on the synapse functionality. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain, while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit and system level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ∼100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  20. Neural networks in astronomy.

    Science.gov (United States)

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and is therefore structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  1. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  2. Neural relativity principle

    Science.gov (United States)

    Koulakov, Alexei

    Olfaction is the final frontier of our senses - the one that is still almost completely mysterious to us. Despite extensive genetic and perceptual data, and a strong push to solve the neural coding problem, fundamental questions about the sense of smell remain unresolved. Unlike vision and hearing, where relatively straightforward relationships between stimulus features and neural responses have been foundational to our understanding of sensory processing, it has been difficult to quantify the properties of odorant molecules that lead to olfactory percepts. In a sense, we do not have olfactory analogs of "red", "green" and "blue". The seminal work of Linda Buck and Richard Axel identified a diverse family of about 1000 receptor molecules that serve as odorant sensors in the nose. However, the properties of smells that these receptors detect remain a mystery. I will review our current understanding of the molecular properties important to the olfactory system. I will also describe a theory that explains how odorant identity can be preserved despite substantial changes in the odorant concentration.

  3. The Neural Web of War

    NARCIS (Netherlands)

    Kennis, M.

    2016-01-01

    The aim of this thesis was to gain more insight into the neural network alterations that may underlie PTSD and trauma-focused therapy outcome. To investigate The Neural Web of War, brain scans of healthy civilians (n=26) and veterans with (n=58) and without (n=29) PTSD were assessed. Structural and fun...

  4. The Neural Support Vector Machine

    NARCIS (Netherlands)

    Wiering, Marco; van der Ree, Michiel; Embrechts, Mark; Stollenga, Marijn; Meijster, Arnold; Nolte, A; Schomaker, Lambertus

    2013-01-01

    This paper describes a new machine learning algorithm for regression and dimensionality reduction tasks. The Neural Support Vector Machine (NSVM) is a hybrid learning algorithm consisting of neural networks and support vector machines (SVMs). The output of the NSVM is given by SVMs that take a ...

  5. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  8. VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK

    African Journals Online (AJOL)

    VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK: A CASE STUDY OF RUMUOLA DISTRIBUTION NETWORK. ... The artificial neural network controller is employed to control the dynamic voltage ...

  9. Neural fields theory and applications

    CERN Document Server

    Graben, Peter; Potthast, Roland; Wright, James

    2014-01-01

    With this book, the editors present the first comprehensive collection in neural field studies, authored by leading scientists in the field - among them are two of the founding fathers of neural field theory. Up to now, research results in the field have been disseminated across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and others. Starting with a tutorial for novices in neural field studies, the book comprises chapters on emergent patterns, their phase transitions and evolution, on stochastic approaches, cortical development, cognition, robotics and computation, large-scale numerical simulations, the coupling of neural fields to the electroencephalogram and phase transitions in anesthesia. The intended readership is students and scientists in applied mathematics, theoretical physics, theoretical biology, and computational neuroscience. Neural field theory and its applications have a long-standing tradition in the mathematical and computational ...

  10. Effective connectivity of hippocampal neural network and its alteration in Mg2+-free epilepsy model.

    Science.gov (United States)

    Gong, Xin-Wei; Li, Jing-Bo; Lu, Qin-Chi; Liang, Pei-Ji; Zhang, Pu-Ming

    2014-01-01

    Understanding the connectivity of the brain neural network and its evolution in epileptiform discharges is meaningful in the epilepsy researches and treatments. In the present study, epileptiform discharges were induced in rat hippocampal slices perfused with Mg2+-free artificial cerebrospinal fluid. The effective connectivity of the hippocampal neural network was studied by comparing the normal and epileptiform discharges recorded by a microelectrode array. The neural network connectivity was constructed by using partial directed coherence and analyzed by graph theory. The transition of the hippocampal network topology from control to epileptiform discharges was demonstrated. Firstly, differences existed in both the averaged in- and out-degree between nodes in the pyramidal cell layer and the granule cell layer, which indicated an information flow from the pyramidal cell layer to the granule cell layer during epileptiform discharges, whereas no consistent information flow was observed in control. Secondly, the neural network showed different small-worldness in the early, middle and late stages of the epileptiform discharges, whereas the control network did not show the small-world property. Thirdly, the network connectivity began to change earlier than the appearance of epileptiform discharges and lasted several seconds after the epileptiform discharges disappeared. These results revealed the important network bases underlying the transition from normal to epileptiform discharges in hippocampal slices. Additionally, this work indicated that the network analysis might provide a useful tool to evaluate the neural network and help to improve the prediction of seizures.
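
    As a rough illustration of the graph-theoretic part of the analysis above, the sketch below builds a directed graph from a placeholder binary connectivity matrix (standing in for thresholded partial directed coherence values) and computes in/out degrees plus clustering and path-length summaries with networkx; it is not the authors' pipeline or their microelectrode-array data.

    # Minimal sketch (placeholder data): summarise a directed effective-connectivity
    # graph with the measures used in such studies: in/out degree, clustering, path length.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    n_channels = 16
    A = (rng.random((n_channels, n_channels)) > 0.7).astype(int)  # placeholder thresholded PDC
    np.fill_diagonal(A, 0)

    G = nx.from_numpy_array(A, create_using=nx.DiGraph)
    in_deg = dict(G.in_degree())
    out_deg = dict(G.out_degree())
    print("mean in-degree :", np.mean(list(in_deg.values())))
    print("mean out-degree:", np.mean(list(out_deg.values())))

    # Small-world-style summary on the undirected skeleton of the graph.
    U = G.to_undirected()
    U = U.subgraph(max(nx.connected_components(U), key=len)).copy()
    print("clustering coefficient   :", nx.average_clustering(U))
    print("average shortest path len:", nx.average_shortest_path_length(U))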

  11. The Use of Neural Network Technology to Model Swimming Performance

    Science.gov (United States)

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the neural network models' precision in predicting the performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamics, hydrostatic and bioenergetics characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach in the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: the non-linear analysis resulting from the use of a feed-forward neural network allowed the development of four performance models; the mean difference between the true and estimated results produced by each of the four neural network models was low; the neural network tool can be a good approach to performance modeling as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs; the use of neural networks for sports ...

  12. Artificial neural network models for biomass gasification in fluidized bed gasifiers

    DEFF Research Database (Denmark)

    Puig Arnavat, Maria; Hernández, J. Alfredo; Bruno, Joan Carles

    2013-01-01

    Artificial neural networks (ANNs) have been applied for modeling the biomass gasification process in fluidized bed reactors. Two architectures of ANN models are presented: one for circulating fluidized bed gasifiers (CFB) and the other for bubbling fluidized bed gasifiers (BFB). Both models determine ... bed gasifier can be successfully predicted by applying neural networks. The ANN models use the biomass composition and a few operating parameters in the input layer, two neurons in the hidden layer and the backpropagation algorithm. The results obtained by these ANNs show high agreement with published ...

  13. Combining Neural Networks for Skin Detection

    CERN Document Server

    Doukim, Chelsia Amy; Chekima, Ali; Omatu, Sigeru

    2011-01-01

    Two types of combining strategies were evaluated namely combining skin features and combining skin classifiers. Several combining rules were applied where the outputs of the skin classifiers are combined using binary operators such as the AND and the OR operators, "Voting", "Sum of Weights" and a new neural network. Three chrominance components from the YCbCr colour space that gave the highest correct detection on their single feature MLP were selected as the combining parameters. A major issue in designing a MLP neural network is to determine the optimal number of hidden units given a set of training patterns. Therefore, a "coarse to fine search" method to find the number of neurons in the hidden layer is proposed. The strategy of combining Cb/Cr and Cr features improved the correct detection by 3.01% compared to the best single feature MLP given by Cb-Cr. The strategy of combining the outputs of three skin classifiers using the "Sum of Weights" rule further improved the correct detection by 4.38% compared t...
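
    The combining rules named in the record (AND, OR, voting, sum of weights) are simple to express; the sketch below applies them to synthetic outputs of three hypothetical skin classifiers. The probabilities, decisions and weights are all placeholders, not the paper's trained MLPs.

    # Minimal sketch (synthetic outputs): combining three binary skin/non-skin
    # classifier decisions with the rules named in the record.
    import numpy as np

    rng = np.random.default_rng(2)
    p = rng.random((3, 1000))              # per-classifier skin probabilities (placeholder)
    d = (p > 0.5).astype(int)              # binary decisions of the three classifiers

    combined_and = d[0] & d[1] & d[2]                  # AND rule
    combined_or = d[0] | d[1] | d[2]                   # OR rule
    combined_vote = (d.sum(axis=0) >= 2).astype(int)   # majority voting

    w = np.array([0.5, 0.3, 0.2])                      # assumed weights (sum to 1)
    combined_sum = (w @ p > 0.5).astype(int)           # "Sum of Weights" rule
    print(combined_and.mean(), combined_or.mean(), combined_vote.mean(), combined_sum.mean())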

  14. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered network and self-organizing analysis maps, which have a strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  15. Neural correlates of consciousness.

    Science.gov (United States)

    Negrao, B L; Viljoen, M

    2009-11-01

    A basic understanding of consciousness and its neural correlates is of major importance for all clinicians, especially those involved with patients with altered states of consciousness. In this paper it is shown that consciousness is dependent on the brainstem and thalamus for arousal; that basic cognition is supported by recurrent electrical activity between the cortex and the thalamus at gamma band frequencies; and that some kind of working memory must, at least fleetingly, be present for awareness to occur. The problem of cognitive binding and the role of attention are briefly addressed, and it is shown that consciousness depends on a multitude of subconscious processes. Although these processes do not represent consciousness, consciousness cannot exist without them.

  16. Neural Darwinism and consciousness.

    Science.gov (United States)

    Seth, Anil K; Baars, Bernard J

    2005-03-01

    Neural Darwinism (ND) is a large scale selectionist theory of brain development and function that has been hypothesized to relate to consciousness. According to ND, consciousness is entailed by reentrant interactions among neuronal populations in the thalamocortical system (the 'dynamic core'). These interactions, which permit high-order discriminations among possible core states, confer selective advantages on organisms possessing them by linking current perceptual events to a past history of value-dependent learning. Here, we assess the consistency of ND with 16 widely recognized properties of consciousness, both physiological (for example, consciousness is associated with widespread, relatively fast, low amplitude interactions in the thalamocortical system), and phenomenal (for example, consciousness involves the existence of a private flow of events available only to the experiencing subject). While no theory accounts fully for all of these properties at present, we find that ND and its recent extensions fare well.

  17. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons.

    Science.gov (United States)

    Ma, Ying; Shaik, Mohammed A; Kozberg, Mariel G; Kim, Sharon H; Portes, Jacob P; Timerman, Dmitriy; Hillman, Elizabeth M C

    2016-12-27

    Brain hemodynamics serve as a proxy for neural activity in a range of noninvasive neuroimaging techniques including functional magnetic resonance imaging (fMRI). In resting-state fMRI, hemodynamic fluctuations have been found to exhibit patterns of bilateral synchrony, with correlated regions inferred to have functional connectivity. However, the relationship between resting-state hemodynamics and underlying neural activity has not been well established, making the neural underpinnings of functional connectivity networks unclear. In this study, neural activity and hemodynamics were recorded simultaneously over the bilateral cortex of awake and anesthetized Thy1-GCaMP mice using wide-field optical mapping. Neural activity was visualized via selective expression of the calcium-sensitive fluorophore GCaMP in layer 2/3 and 5 excitatory neurons. Characteristic patterns of resting-state hemodynamics were accompanied by more rapidly changing bilateral patterns of resting-state neural activity. Spatiotemporal hemodynamics could be modeled by convolving this neural activity with hemodynamic response functions derived through both deconvolution and gamma-variate fitting. Simultaneous imaging and electrophysiology confirmed that Thy1-GCaMP signals are well-predicted by multiunit activity. Neurovascular coupling between resting-state neural activity and hemodynamics was robust and fast in awake animals, whereas coupling in urethane-anesthetized animals was slower, and in some cases included lower-frequency (...) neural activity. The patterns of bilaterally-symmetric spontaneous neural activity revealed by wide-field Thy1-GCaMP imaging may depict the neural foundation of functional connectivity networks detected in resting-state fMRI.

  18. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the anesthetic the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm are used to determine the depth of anesthesia during the continuation stage of anesthesia and to estimate the amount of anesthetic medicine to be applied at that moment. Feed-forward neural networks are also used for comparison, and the conjugate gradient algorithm is compared with back-propagation (BP) for training the networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data were recorded with a Nihon Kohden 9200 22-channel EEG device, using the international 8-channel bipolar 10-20 montage system (8 TB-b system) for the recording electrodes, and sampled once every 2 milliseconds. The artificial neural network was designed to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The inputs were the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current EEG segment in that range to the total PSD power of an EEG segment taken prior to anesthesia.
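
    To make the 60-30-1 Elman architecture concrete, the sketch below implements only the forward pass with sigmoid units and a recurrent context state, on random weights and placeholder EEG features; the conjugate-gradient training described in the record is omitted.

    # Minimal sketch (random weights, not the trained network): forward pass of an
    # Elman recurrent network with 60 inputs, 30 sigmoid hidden units (with the
    # recurrent context copy) and 1 sigmoid output, as described in the record.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(3)
    n_in, n_hidden, n_out = 60, 30, 1
    W_ih = rng.normal(scale=0.1, size=(n_hidden, n_in))
    W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context (recurrent) weights
    W_ho = rng.normal(scale=0.1, size=(n_out, n_hidden))

    def elman_forward(sequence):
        """Run a sequence of PSD feature vectors through the Elman network."""
        h = np.zeros(n_hidden)
        outputs = []
        for x in sequence:
            h = sigmoid(W_ih @ x + W_hh @ h)   # hidden state depends on the previous state
            outputs.append(sigmoid(W_ho @ h))
        return np.array(outputs)

    eeg_features = rng.random((20, n_in))      # placeholder 10-s EEG PSD features
    print(elman_forward(eeg_features).ravel())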

  19. VSWI Wetlands Advisory Layer

    Data.gov (United States)

    Vermont Center for Geographic Information — This dataset represents the DEC Wetlands Program's Advisory layer. This layer makes the most up-to-date, non-jurisdictional, wetlands mapping available to the public...

  20. Basic Ozone Layer Science

    Science.gov (United States)

    Learn about the ozone layer and how human activities deplete it. This page provides information on the chemical processes that lead to ozone layer depletion, and scientists' efforts to understand them.

  1. Ozone Layer Protection

    Science.gov (United States)

    Topics covered on the page include managing refrigerant emissions, stationary refrigeration and air conditioning, car and other mobile air conditioning, and the GreenChill Partnership.

  2. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    Science.gov (United States)

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
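
    The contrast between time-driven and event-driven updating can be illustrated with a toy leaky integrate-and-fire neuron, as in the sketch below; it is a didactic example only, not the hybrid CPU-GPU simulator described in the record.

    # Minimal sketch: the same input spike train simulated with a time-driven loop
    # (fixed dt) and an event-driven update that jumps between input spikes using
    # the exact exponential decay of the membrane potential.
    import numpy as np

    tau, v_th, w = 20.0, 1.0, 0.4              # membrane time constant (ms), threshold, weight
    spikes_in = [5.0, 12.0, 14.0, 16.0, 40.0]  # input spike times (ms)

    def time_driven(t_end=60.0, dt=0.1):
        v, t, out = 0.0, 0.0, []
        pending = list(spikes_in)
        while t < t_end:
            v *= np.exp(-dt / tau)             # leak over one step
            while pending and pending[0] <= t: # deliver due input spikes
                v += w
                pending.pop(0)
            if v >= v_th:
                out.append(t)
                v = 0.0
            t += dt
        return out

    def event_driven():
        v, t_last, out = 0.0, 0.0, []
        for t in spikes_in:                    # advance directly to the next event
            v = v * np.exp(-(t - t_last) / tau) + w
            t_last = t
            if v >= v_th:
                out.append(t)
                v = 0.0
        return out

    print("time-driven output spikes :", np.round(time_driven(), 1))
    print("event-driven output spikes:", event_driven())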

  3. Artificial Neural Network Analysis System

    Science.gov (United States)

    2007-11-02

    Report metadata: Contract No. DASG60-00-M-0201; purchase request no.: Foot in the Door-01; title: Artificial Neural Network Analysis System; company: Atlantic...; author: Powell, Bruce C; report date: 27-02-2001; dates covered: 28-10-2000 to 27-02-2001.

  4. Cooperating attackers in neural cryptography.

    Science.gov (United States)

    Shacham, Lanir N; Klein, Einat; Mislovaty, Rachel; Kanter, Ido; Kinzel, Wolfgang

    2004-06-01

    A successful attack strategy in neural cryptography is presented. The neural cryptosystem, based on synchronization of neural networks by mutual learning, has been recently shown to be secure under different attack strategies. The success of the advanced attacker presented here, called the "majority-flipping attacker," does not decay with the parameters of the model. This attacker's outstanding success is due to its using a group of attackers which cooperate throughout the synchronization process, unlike any other attack strategy known. An analytical description of this attack is also presented, and fits the results of simulations.
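
    The cryptosystem being attacked is the tree parity machine key-exchange scheme, in which two networks synchronize by mutual learning. The sketch below shows only that synchronization step, with toy parameters (K = 3 hidden units, N = 10 inputs per unit, weight bound L = 3); it is illustrative, not a secure implementation, and it does not model the majority-flipping attack itself.

    # Minimal sketch: mutual learning of two tree parity machines with a Hebbian rule.
    import numpy as np

    K, N, L = 3, 10, 3
    rng = np.random.default_rng(4)

    def tpm_output(w, x):
        sigma = np.sign(np.sum(w * x, axis=1))
        sigma[sigma == 0] = -1                  # break ties deterministically
        return sigma, int(np.prod(sigma))

    def hebbian_update(w, x, sigma, tau):
        for k in range(K):
            if sigma[k] == tau:                 # only hidden units agreeing with the output move
                w[k] = np.clip(w[k] + tau * x[k], -L, L)

    wA = rng.integers(-L, L + 1, size=(K, N))
    wB = rng.integers(-L, L + 1, size=(K, N))

    for step in range(1, 100001):
        x = rng.choice([-1, 1], size=(K, N))    # public random input
        sA, tauA = tpm_output(wA, x)
        sB, tauB = tpm_output(wB, x)
        if tauA == tauB:                        # weights move only when the public outputs agree
            hebbian_update(wA, x, sA, tauA)
            hebbian_update(wB, x, sB, tauB)
        if np.array_equal(wA, wB):
            print("synchronized after", step, "exchanged inputs")
            break
    else:
        print("did not synchronize within the step budget")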

  5. YAP/TAZ enhance mammalian embryonic neural stem cell characteristics in a Tead-dependent manner

    Energy Technology Data Exchange (ETDEWEB)

    Han, Dasol; Byun, Sung-Hyun; Park, Soojeong; Kim, Juwan; Kim, Inhee; Ha, Soobong; Kwon, Mookwang; Yoon, Keejung, E-mail: keejung@skku.edu

    2015-02-27

    Mammalian brain development is regulated by multiple signaling pathways controlling cell proliferation, migration and differentiation. Here we show that YAP/TAZ enhance embryonic neural stem cell characteristics in a cell autonomous fashion using diverse experimental approaches. Introduction of retroviral vectors expressing YAP or TAZ into the mouse embryonic brain induced cell localization in the ventricular zone (VZ), which is the embryonic neural stem cell niche. This change in cell distribution in the cortical layer is due to the increased stemness of infected cells; YAP-expressing cells were colabeled with Sox2, a neural stem cell marker, and YAP/TAZ increased the frequency and size of neurospheres, indicating enhanced self-renewal- and proliferative ability of neural stem cells. These effects appear to be TEA domain family transcription factor (Tead)–dependent; a Tead binding-defective YAP mutant lost the ability to promote neural stem cell characteristics. Consistently, in utero gene transfer of a constitutively active form of Tead2 (Tead2-VP16) recapitulated all the features of YAP/TAZ overexpression, and dominant negative Tead2-EnR resulted in marked cell exit from the VZ toward outer cortical layers. Taken together, these results indicate that the Tead-dependent YAP/TAZ signaling pathway plays important roles in neural stem cell maintenance by enhancing stemness of neural stem cells during mammalian brain development.
    Highlights:
    • Roles of YAP and Tead in vivo during mammalian brain development are clarified.
    • Expression of YAP promotes embryonic neural stem cell characteristics in vivo in a cell autonomous fashion.
    • Enhancement of neural stem cell characteristics by YAP depends on Tead.
    • Transcriptionally active form of Tead alone can recapitulate the effects of YAP.
    • Transcriptionally repressive form of Tead severely reduces stem cell characteristics.

  6. Dynamic versus static neural network model for rainfall forecasting at Klang River Basin, Malaysia

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2012-04-01

    Full Text Available Rainfall is considered as one of the major components of the hydrological process; it takes significant part in evaluating drought and flooding events. Therefore, it is important to have an accurate model for rainfall forecasting. Recently, several data-driven modeling approaches have been investigated to perform such forecasting tasks as multi-layer perceptron neural networks (MLP-NN). In fact, the rainfall time series modeling involves an important temporal dimension. On the other hand, the classical MLP-NN is a static and has a memoryless network architecture that is effective for complex nonlinear static mapping. This research focuses on investigating the potential of introducing a neural network that could address the temporal relationships of the rainfall series.

    Two different static neural networks and one dynamic neural network, namely the multi-layer perceptron neural network (MLP-NN), radial basis function neural network (RBFNN) and input delay neural network (IDNN), respectively, have been examined in this study. Those models had been developed for the two time horizons for monthly and weekly rainfall forecasting at Klang River, Malaysia. Data collected over 12 yr (1997–2008) on a weekly basis and 22 yr (1987–2008) on a monthly basis were used to develop and examine the performance of the proposed models. Comprehensive comparison analyses were carried out to evaluate the performance of the proposed static and dynamic neural networks. Results showed that the MLP-NN neural network model is able to follow trends of the actual rainfall, however, not very accurately. RBFNN model achieved better accuracy than the MLP-NN model. Moreover, the forecasting accuracy of the IDNN model was better than that of static network during both training and testing stages, which proves a consistent level of accuracy with seen and unseen data.

  7. Dynamic versus static neural network model for rainfall forecasting at Klang River Basin, Malaysia

    Science.gov (United States)

    El-Shafie, A.; Noureldin, A.; Taha, M.; Hussain, A.; Mukhlisin, M.

    2012-04-01

    Rainfall is considered as one of the major components of the hydrological process; it takes significant part in evaluating drought and flooding events. Therefore, it is important to have an accurate model for rainfall forecasting. Recently, several data-driven modeling approaches have been investigated to perform such forecasting tasks as multi-layer perceptron neural networks (MLP-NN). In fact, the rainfall time series modeling involves an important temporal dimension. On the other hand, the classical MLP-NN is a static and has a memoryless network architecture that is effective for complex nonlinear static mapping. This research focuses on investigating the potential of introducing a neural network that could address the temporal relationships of the rainfall series. Two different static neural networks and one dynamic neural network, namely the multi-layer perceptron neural network (MLP-NN), radial basis function neural network (RBFNN) and input delay neural network (IDNN), respectively, have been examined in this study. Those models had been developed for the two time horizons for monthly and weekly rainfall forecasting at Klang River, Malaysia. Data collected over 12 yr (1997-2008) on a weekly basis and 22 yr (1987-2008) on a monthly basis were used to develop and examine the performance of the proposed models. Comprehensive comparison analyses were carried out to evaluate the performance of the proposed static and dynamic neural networks. Results showed that the MLP-NN neural network model is able to follow trends of the actual rainfall, however, not very accurately. RBFNN model achieved better accuracy than the MLP-NN model. Moreover, the forecasting accuracy of the IDNN model was better than that of static network during both training and testing stages, which proves a consistent level of accuracy with seen and unseen data.
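
    The input delay neural network (IDNN) idea, feeding the network a window of past values so that the temporal structure of the series is visible to a static model, can be approximated with a tapped delay line in front of an ordinary MLP, as sketched below on a synthetic series (not the Klang River data).

    # Minimal sketch: a tapped delay line of past rainfall values feeding an MLP,
    # which is how the temporal dimension enters an input-delay network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    rain = np.abs(rng.normal(size=300)).cumsum() % 10.0   # placeholder monthly series

    def delay_embed(series, n_lags=4):
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        y = series[n_lags:]
        return X, y

    X, y = delay_embed(rain)
    split = int(0.8 * len(y))
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    model.fit(X[:split], y[:split])
    print("test RMSE:", np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2)))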

  8. A model of traffic signs recognition with convolutional neural network

    Science.gov (United States)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem: a deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions related to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments were implemented using the public dataset of the China competition of fuzzy image processing. Experimental results show that the proposed model produces a recognition accuracy of 99.01% on the training dataset and 92% in the preliminary contest, placing it within the fourth best.
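
    A minimal PyTorch sketch of the layer arrangement described above (input, three convolutional layers, three subsampling layers, one fully-connected layer, output) is given below; the 48x48 RGB input size, filter counts and 43 output classes are assumptions, since the record does not specify them.

    # Minimal sketch of a CNN with three conv + three pooling layers, one FC layer
    # and an output layer, mirroring the architecture described in the record.
    import torch
    import torch.nn as nn

    class TrafficSignCNN(nn.Module):
        def __init__(self, n_classes: int = 43):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 6 * 6, 128), nn.ReLU(),   # fully-connected layer
                nn.Linear(128, n_classes),               # output layer (class scores)
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = TrafficSignCNN()
    dummy = torch.randn(1, 3, 48, 48)      # one placeholder 48x48 RGB image
    print(model(dummy).shape)              # -> torch.Size([1, 43])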

  9. Neural correlates of single-vessel haemodynamic responses in vivo.

    Science.gov (United States)

    O'Herron, Philip; Chhatbar, Pratik Y; Levy, Manuel; Shen, Zhiming; Schramm, Adrien E; Lu, Zhongyang; Kara, Prakash

    2016-06-16

    Neural activation increases blood flow locally. This vascular signal is used by functional imaging techniques to infer the location and strength of neural activity. However, the precise spatial scale over which neural and vascular signals are correlated is unknown. Furthermore, the relative role of synaptic and spiking activity in driving haemodynamic signals is controversial. Previous studies recorded local field potentials as a measure of synaptic activity together with spiking activity and low-resolution haemodynamic imaging. Here we used two-photon microscopy to measure sensory-evoked responses of individual blood vessels (dilation, blood velocity) while imaging synaptic and spiking activity in the surrounding tissue using fluorescent glutamate and calcium sensors. In cat primary visual cortex, where neurons are clustered by their preference for stimulus orientation, we discovered new maps for excitatory synaptic activity, which were organized similarly to those for spiking activity but were less selective for stimulus orientation and direction. We generated tuning curves for individual vessel responses for the first time and found that parenchymal vessels in cortical layer 2/3 were orientation selective. Neighbouring penetrating arterioles had different orientation preferences. Pial surface arteries in cats, as well as surface arteries and penetrating arterioles in rat visual cortex (where orientation maps do not exist), responded to visual stimuli but had no orientation selectivity. We integrated synaptic or spiking responses around individual parenchymal vessels in cats and established that the vascular and neural responses had the same orientation preference. However, synaptic and spiking responses were more selective than vascular responses--vessels frequently responded robustly to stimuli that evoked little to no neural activity in the surrounding tissue. Thus, local neural and haemodynamic signals were partly decoupled. Together, these results indicate

  10. Models of Acetylcholine and Dopamine Signals Differentially Improve Neural Representations

    Science.gov (United States)

    Holca-Lamarre, Raphaël; Lücke, Jörg; Obermayer, Klaus

    2017-01-01

    Biological and artificial neural networks (ANNs) represent input signals as patterns of neural activity. In biology, neuromodulators can trigger important reorganizations of these neural representations. For instance, pairing a stimulus with the release of either acetylcholine (ACh) or dopamine (DA) evokes long lasting increases in the responses of neurons to the paired stimulus. The functional roles of ACh and DA in rearranging representations remain largely unknown. Here, we address this question using a Hebbian-learning neural network model. Our aim is both to gain a functional understanding of ACh and DA transmission in shaping biological representations and to explore neuromodulator-inspired learning rules for ANNs. We model the effects of ACh and DA on synaptic plasticity and confirm that stimuli coinciding with greater neuromodulator activation are over represented in the network. We then simulate the physiological release schedules of ACh and DA. We measure the impact of neuromodulator release on the network's representation and on its performance on a classification task. We find that ACh and DA trigger distinct changes in neural representations that both improve performance. The putative ACh signal redistributes neural preferences so that more neurons encode stimulus classes that are challenging for the network. The putative DA signal adapts synaptic weights so that they better match the classes of the task at hand. Our model thus offers a functional explanation for the effects of ACh and DA on cortical representations. Additionally, our learning algorithm yields performances comparable to those of state-of-the-art optimisation methods in multi-layer perceptrons while requiring weaker supervision signals and interacting with synaptically-local weight updates. PMID:28690509
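
    The core mechanism described above, Hebbian plasticity whose strength is scaled by a neuromodulator signal so that paired stimuli become over-represented, can be caricatured in a few lines; the sketch below is a toy model with made-up gains and stimuli, not the authors' network.

    # Minimal sketch: a Hebbian weight update in which a scalar "neuromodulator"
    # signal (standing in for ACh or DA release) scales the learning rate.
    import numpy as np

    rng = np.random.default_rng(6)
    n_in, n_out = 20, 10
    W = rng.normal(scale=0.1, size=(n_out, n_in))

    def hebbian_step(W, x, neuromodulator=1.0, lr=0.01):
        y = np.maximum(W @ x, 0.0)                       # rectified responses
        W += lr * neuromodulator * np.outer(y, x)        # modulated Hebbian update
        W /= np.linalg.norm(W, axis=1, keepdims=True)    # keep weights bounded
        return W

    paired = rng.random(n_in)        # stimulus paired with neuromodulator release
    unpaired = rng.random(n_in)
    for _ in range(200):
        W = hebbian_step(W, paired, neuromodulator=3.0)  # stronger plasticity when paired
        W = hebbian_step(W, unpaired, neuromodulator=1.0)

    print("mean response to paired  :", np.maximum(W @ paired, 0).mean())
    print("mean response to unpaired:", np.maximum(W @ unpaired, 0).mean())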

  11. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    Science.gov (United States)

    2015-12-15

    convolution, activation functions, and pooling. For a model trained on N classes, the output from the classification layer comprises N + 1 ... Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their ...

  12. Morphogenetic movements in the neural plate and neural tube: mouse.

    Science.gov (United States)

    Massarwa, R'ada; Ray, Heather J; Niswander, Lee

    2014-01-01

    The neural tube (NT), the embryonic precursor of the vertebrate brain and spinal cord, is generated by a complex and highly dynamic morphological process. In mammals, the initially flat neural plate bends and lifts bilaterally to generate the neural folds followed by fusion of the folds at the midline during the process of neural tube closure (NTC). Failures in any step of this process can lead to neural tube defects (NTDs), a common class of birth defects that occur in approximately 1 in 1000 live births. These severe birth abnormalities include spina bifida, a failure of closure at the spinal level; craniorachischisis, a failure of NTC along the entire body axis; and exencephaly, a failure of the cranial neural folds to close which leads to degeneration of the exposed brain tissue termed anencephaly. The mouse embryo presents excellent opportunities to explore the genetic basis of NTC in mammals; however, its in utero development has also presented great challenges in generating a deeper understanding of how gene function regulates the cell and tissue behaviors that drive this highly dynamic process. Recent technological advances are now allowing researchers to address these questions through visualization of NTC dynamics in the mouse embryo in real time, thus offering new insights into the morphogenesis of mammalian NTC.

  13. Determining the amount of anesthetic medicine to be applied by using Elman's recurrent neural networks via resilient back propagation.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with resilient back-propagation are used to determine the depth of anesthesia during the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. From 30 patients, 57 distinct EEG recordings were collected prior to and during anaesthesia at different levels. The applied artificial neural network is composed of three layers, namely the input layer, the middle (hidden) layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. Prediction was made by means of the ANN; for training and testing, the previous anaesthesia amount and the total power/normal power and total power/previous power ratios were used. The system was able to produce correct responses with an average accuracy of 95% of the cases. The method is also computationally fast, and acceptable real-time clinical performance has been obtained.

  14. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  15. Artificial intelligence: Deep neural reasoning

    Science.gov (United States)

    Jaeger, Herbert

    2016-10-01

    The human brain can solve highly abstract reasoning problems using a neural network that is entirely physical. The underlying mechanisms are only partially understood, but an artificial network provides valuable insight. See Article p.471

  16. Logic Mining Using Neural Networks

    CERN Document Server

    Sathasivam, Saratha

    2008-01-01

    Knowledge could be gained from experts, specialists in the area of interest, or it can be gained by induction from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Data mining methods are important in the management of complex systems. There are many technologies available to data mining practitioners, including Artificial Neural Networks, Regression, and Decision Trees. Neural networks have been successfully applied in wide range of supervised and unsupervised learning applications. Neural network methods are not commonly used for data mining tasks, because they often produce incomprehensible models, and require long training times. One way in which the collective properties of a neural network may be used to implement a computational task is by way of the concept of energy minimization. The Hopfield network is well-known example of such an approach. The Hopfield network is useful as content addressable memory or an analog computer for s...
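
    The energy-minimization idea referred to above is the Hopfield network acting as a content-addressable memory: patterns are stored with a Hebbian rule and recalled by asynchronous updates that lower the energy E = -1/2 * s^T W s. The sketch below demonstrates this on toy bipolar patterns, not on the logic-mining task itself.

    # Minimal sketch: Hopfield content-addressable memory recalling a stored
    # pattern from a noisy cue by iteratively lowering the network energy.
    import numpy as np

    rng = np.random.default_rng(7)
    patterns = rng.choice([-1, 1], size=(3, 64))         # three stored bipolar patterns

    W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
    np.fill_diagonal(W, 0)                               # Hebbian storage, no self-connections

    def energy(s):
        return -0.5 * s @ W @ s

    def recall(cue, n_sweeps=10):
        s = cue.copy()
        for _ in range(n_sweeps):
            for i in rng.permutation(len(s)):            # asynchronous updates
                s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    noisy = patterns[0].copy()
    flip = rng.choice(64, size=10, replace=False)
    noisy[flip] *= -1                                    # corrupt 10 of 64 bits

    recovered = recall(noisy)
    print("energy before/after:", energy(noisy), energy(recovered))
    print("bits matching stored pattern:", int((recovered == patterns[0]).sum()), "/ 64")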

  17. Neural correlates of consciousness reconsidered.

    Science.gov (United States)

    Neisser, Joseph

    2012-06-01

    It is widely accepted among philosophers that neuroscientists are conducting a search for the neural correlates of consciousness, or NCC. Chalmers (2000) conceptualized this research program as the attempt to correlate the contents of conscious experience with the contents of representations in specific neural populations. A notable claim on behalf of this interpretation is that the neutral language of "correlates" frees us from philosophical disputes over the mind/body relation, allowing the science to move independently. But the experimental paradigms and explanatory canons of neuroscience are not neutral about the mechanical relation between consciousness and the brain. I argue that NCC research is best characterized as an attempt to locate a causally relevant neural mechanism and not as an effort to identify a discrete neural representation, the content of which correlates with some actual experience. It might be said that the first C in "NCC" should stand for "causes" rather than "correlates."

  18. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed ... examined, and it appears that, considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible and is therefore not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models. The following groups of control concepts are covered: control concepts including parameter estimation; control concepts including inverse modelling; and control concepts including optimal control. For each of the three groups, different control concepts and specific training methods are described in detail. Further, all control concepts are tested on the same ...

  19. Neural communication in posttraumatic growth.

    Science.gov (United States)

    Anders, Samantha L; Peterson, Carly K; James, Lisa M; Engdahl, Brian; Leuthold, Arthur C; Georgopoulos, Apostolos P

    2015-07-01

    Posttraumatic growth (PTG), or positive psychological changes following exposure to traumatic events, is commonly reported among trauma survivors. In the present study, we examined neural correlates of PTG in 106 veterans with PTSD and 193 veteran controls using task-free magnetoencephalography (MEG), diagnostic interviews and measures of PTG, and traumatic event exposure. Global synchronous neural interactions (SNIs) were significantly modulated downward with increasing PTG scores in controls (p = .005), but not in veterans with PTSD (p = .601). This effect was primarily characterized by negative slopes in local neural networks, was strongest in the medial prefrontal cortex, and was much stronger and more extensive in the control than the PTSD group. The present study complements previous research highlighting the role of neural adaptation in healthy functioning.

  20. Neural components of altruistic punishment

    Directory of Open Access Journals (Sweden)

    Emily eDu

    2015-02-01

    Full Text Available Altruistic punishment, which occurs when an individual incurs a cost to punish in response to unfairness or a norm violation, may play a role in perpetuating cooperation. The neural correlates underlying costly punishment have only recently begun to be explored. Here we review the current state of research on the neural basis of altruism from the perspectives of costly punishment, emphasizing the importance of characterizing elementary neural processes underlying a decision to punish. In particular, we emphasize three cognitive processes that contribute to the decision to altruistically punish in most scenarios: inequity aversion, cost-benefit calculation, and social reference frame to distinguish self from others. Overall, we argue for the importance of understanding the neural correlates of altruistic punishment with respect to the core computations necessary to achieve a decision to punish.

  1. Modular, Hierarchical Learning By Artificial Neural Networks

    Science.gov (United States)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks that are more structured than networks in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  2. A Study on Associative Neural Memories

    OpenAIRE

    B.D.C.N.Prasad; P. E. S. N. Krishna Prasad; Sagar Yeruva; P Sita Rama Murty

    2011-01-01

    Memory plays a major role in Artificial Neural Networks. Without memory, Neural Network can not be learned itself. One of the primary concepts of memory in neural networks is Associative neural memories. A survey has been made on associative neural memories such as Simple associative memories (SAM), Dynamic associative memories (DAM), Bidirectional Associative memories (BAM), Hopfield memories, Context Sensitive Auto-associative memories (CSAM) and so on. These memories can be applied in vari...

  3. Neural Network Communications Signal Processing

    Science.gov (United States)

    1994-08-01

    Technical Information Report for the Neural Network Communications Signal Processing Program, CDRL A003, 31 March 1993. Software Development Plan for... track changing jamming conditions to provide the decoder with the best log-likelihood ratio metrics at a given time. As part of our development plan we... Artificial Neural Networks (ICANN-91), Volume 2, June 24-28, 1991, pp. 1677-1680. Kohonen, Teuvo; Raivio, Kimmo; Simula, Olli; Venta, Olli; Henriksson

  4. Serotonin, neural markers, and memory

    OpenAIRE

    Alfredo eMeneses

    2015-01-01

    Diverse neuropsychiatric disorders present dysfunctional memory and no effective treatment exists for them, likely as a result of the absence of neural markers associated with memory. Neurotransmitter systems and signaling pathways have been implicated in memory and dysfunctional memory; however, their role is poorly understood. Hence, neural markers and cerebral functions and dysfunctions are revised. To our knowledge no previous systematic works have been published addressing these issues. The i...

  5. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for?

  6. ON THE ORDER OF APPROXIMATION BY PERIODIC NEURAL NETWORKS BASED ON SCATTERED NODES%基于散乱阈值结点的周期神经网络的逼近阶

    Institute of Scientific and Technical Information of China (English)

    周观珍

    2005-01-01

    The relationship between the order of approximation by neural networks based on scattered threshold value nodes and the number of neurons in a single hidden layer is investigated. The results obtained show that the degree of approximation by a periodic neural network with one hidden layer and scattered threshold value nodes increases with the number of neurons in the hidden layer and with the smoothness of the excitation function.

  7. Random neural Q-learning for obstacle avoidance of a mobile robot in unknown environments

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2016-07-01

    Full Text Available The article presents a random neural Q-learning strategy for the obstacle avoidance problem of an autonomous mobile robot in unknown environments. In the proposed strategy, two independent modules, namely, avoidance without considering the target and goal-seeking without considering obstacles, are first trained using the proposed random neural Q-learning algorithm to obtain their best control policies. Then, the two trained modules are combined based on a switching function to realize the obstacle avoidance in unknown environments. For the proposed random neural Q-learning algorithm, a single-hidden layer feedforward network is used to approximate the Q-function to estimate the Q-value. The parameters of the single-hidden layer feedforward network are modified using the recently proposed neural algorithm named the online sequential version of extreme learning machine, where the parameters of the hidden nodes are assigned randomly and the sample data can come one by one. However, different from the original online sequential version of extreme learning machine algorithm, the initial output weights are estimated subjected to quadratic inequality constraint to improve the convergence speed. Finally, the simulation results demonstrate that the proposed random neural Q-learning strategy can successfully solve the obstacle avoidance problem. Also, the higher learning efficiency and better generalization ability are achieved by the proposed random neural Q-learning algorithm compared with the Q-learning based on the back-propagation method.
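
    The ELM-style function approximation used above, a single hidden layer whose input weights are assigned randomly and whose output weights are solved by least squares, can be sketched as a fitted-Q loop on placeholder transitions; the constrained initial-weight estimation and the robot task are not reproduced here.

    # Minimal sketch: an ELM-style Q-function approximator with random, fixed input
    # weights and output weights solved by regularised least squares.
    import numpy as np

    rng = np.random.default_rng(8)
    n_state, n_actions, n_hidden = 4, 3, 50

    W_in = rng.normal(size=(n_hidden, n_state))          # random, never trained
    b = rng.normal(size=n_hidden)

    def hidden(states):
        return np.tanh(states @ W_in.T + b)              # random feature map

    # Placeholder transitions (s, a, r, s') and Q-learning targets.
    S = rng.random((500, n_state))
    A = rng.integers(0, n_actions, size=500)
    R = rng.random(500)
    S_next = rng.random((500, n_state))

    beta = np.zeros((n_hidden, n_actions))               # output weights (the only learned part)
    gamma, ridge = 0.95, 1e-3
    for _ in range(20):                                  # fitted-Q style iterations
        targets = hidden(S) @ beta
        targets[np.arange(500), A] = R + gamma * (hidden(S_next) @ beta).max(axis=1)
        H = hidden(S)
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ targets)

    print("greedy actions for first 5 states:", (hidden(S[:5]) @ beta).argmax(axis=1))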

  8. Electrochemical layer-by-layer approach to fabricate mechanically stable platinum black microelectrodes using a mussel-inspired polydopamine adhesive

    Science.gov (United States)

    Kim, Raeyoung; Nam, Yoonkey

    2015-04-01

    Objective. Platinum black (PtBK) has long been used for microelectrode fabrication owing to its high performance in recording neural signals. The porous structure of PtBK enlarges the surface area and lowers the impedance, which results in background noise reduction. However, the brittleness of PtBK has been a problem in practice. In this work, we report mechanically stable PtBK microelectrodes using a bioinspired adhesive film, polydopamine (pDA), while maintaining the low impedance of PtBK. Approach. The pDA layer was incorporated into the PtBK structure through electrochemical layer-by-layer deposition. Varying the number of layers and the order of materials, multi-layered pDA-PtBK hybrids were fabricated and the electrical properties, both impedance and charge injection limit, were evaluated. Main results. Multilayered pDA-PtBK hybrids had electrical impedances as low as PtBK controls and a charge injection limit twice as large as controls. In the 30 min ultrasonication agitation test, impedance levels barely changed for some of the pDA-PtBK hybrids, indicating that the pDA improved the mechanical properties of the PtBK structures. The pDA-PtBK hybrid microelectrodes readily recorded neural signals of cultured hippocampal neurons, where background noise levels and signal-to-noise ratios were 2.43-3.23 μVrms and 28.4-69.1, respectively. Significance. The developed pDA-PtBK hybrid microelectrodes are expected to be applicable to neural sensors for neural prosthetic studies.

  9. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Musson, John C. [JLAB; Seaton, Chad [JLAB; Spata, Mike F. [JLAB; Yan, Jianxun [JLAB

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.

  10. Flexibility of neural stem cells

    Directory of Open Access Journals (Sweden)

    Eumorphia eRemboutsika

    2011-04-01

    Full Text Available Embryonic cortical neural stem cells are self-renewing progenitors that can differentiate into neurons and glia. We generated neurospheres from the developing cerebral cortex using a mouse genetic model that allows for lineage selection and found that the self-renewing neural stem cells are restricted to Sox2 expressing cells. Under normal conditions, embryonic cortical neurospheres are heterogeneous with regard to Sox2 expression and contain astrocytes, neural stem cells and neural progenitor cells sufficiently plastic to give rise to neural crest cells when transplanted into the hindbrain of E1.5 chick and E8 mouse embryos. However, when neurospheres are maintained under lineage selection, such that all cells express Sox2, neural stem cells maintain their Pax6+ cortical radial glia identity and exhibit a more restricted fate in vitro and after transplantation. These data demonstrate that Sox2 preserves the cortical identity and regulates the plasticity of self-renewing Pax6+ radial glia cells.

  11. Building footprint extraction from digital surface models using neural networks

    Science.gov (United States)

    Davydova, Ksenia; Cui, Shiyong; Reinartz, Peter

    2016-10-01

    Two-dimensional building footprints are a basis for many applications: from cartography to three-dimensional building model generation. Although many methodologies have been proposed for building footprint extraction, this topic remains an open research area. Neural networks are able to model the complex relationships between a multivariate input vector and the target vector. Based on these abilities we propose a methodology using neural networks and Markov Random Fields (MRF) for automatic building footprint extraction from normalized Digital Surface Models (nDSM) and satellite images within urban areas. The proposed approach has two main steps. In the first step, the unary terms of the MRF energy function are learned by a four-layer neural network. The neural network is trained on a large set of patches consisting of both nDSM and Normalized Difference Vegetation Index (NDVI). Prediction is then performed to calculate the unary terms that are used in the MRF. In the second step, the energy function is minimized using a maxflow algorithm, which leads to a binary building mask. The building extraction results are compared with available ground truth. The comparison illustrates the efficiency of the proposed algorithm, which can extract approximately 80% of the buildings from the nDSM with high accuracy.
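
    A minimal sketch of the first step, assuming synthetic nDSM/NDVI patch features in place of real data: a network with two hidden layers (four layers counting input and output) is trained, and its negative log-probabilities are taken as the MRF unary terms. The maxflow minimization of the second step is omitted.

      # Sketch of learning per-patch unary terms for the MRF from nDSM + NDVI
      # features with a four-layer network. Synthetic data stands in for real
      # patches; the maxflow step is not shown.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      n = 2000
      ndsm = rng.uniform(0, 30, n)                    # height above ground [m]
      ndvi = rng.uniform(-0.2, 0.9, n)                # vegetation index
      X = np.column_stack([ndsm, ndvi])
      y = ((ndsm > 4) & (ndvi < 0.3)).astype(int)     # crude "building" label

      net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
      net.fit(X, y)

      # Negative log-probabilities serve as unary terms of the MRF energy.
      unary = -np.log(net.predict_proba(X) + 1e-9)
      print(unary[:3])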

  12. POPFNN: A Pseudo Outer-product Based Fuzzy Neural Network.

    Science.gov (United States)

    Quek, C; Zhou, R W.

    1996-12-01

    A novel fuzzy neural network, called the pseudo outer-product based fuzzy neural network (POPFNN), is proposed in this paper. The functions performed by each layer in the proposed POPFNN strictly correspond to the inference steps in the truth value restriction method in fuzzy logic [[Mantaras (1990)] Approximate reasoning models, Ellis Horwood]. This correspondence gives it a strong theoretical basis. Similar to most of the existing fuzzy neural networks, the proposed POPFNN uses a self-organizing algorithm ([Kohonen, 1988], Self-organization and associative memories, Springer) to learn and initialize the membership functions of the input and output variables from a set of training data. However, instead of employing the popularly used competitive learning [[Kosko (1990)] IEEE Trans. Neural Networks, 3(5), 801], this paper proposes a novel pseudo outer-product (POP) learning algorithm to identify the fuzzy rules that are supported by the training data. The proposed POP learning algorithm is fast, reliable, and highly intuitive. Extensive experimental results and comparisons are presented at the end of the paper for discussion. Copyright 1996 Elsevier Science Ltd.

  13. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
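
    The convolutional part of such a system can be sketched in PyTorch as below: a stack of convolution/pooling layers extracting successively larger features, followed by a classifier over 40 identities. The SOM front end, the local image sampling and the actual 400-image data set are not reproduced, and the layer sizes are illustrative assumptions.

      # Illustrative PyTorch sketch of a small convolutional network for face
      # classification; not the exact architecture of the cited paper.
      import torch
      import torch.nn as nn

      class SmallFaceCNN(nn.Module):
          def __init__(self, num_classes=40):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(16 * 13 * 13, num_classes)

          def forward(self, x):
              x = self.features(x)                      # successively larger features
              return self.classifier(x.flatten(start_dim=1))

      # One 64x64 grayscale face image per batch element.
      logits = SmallFaceCNN()(torch.randn(4, 1, 64, 64))
      print(logits.shape)   # torch.Size([4, 40])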

  14. Multi-channel micro neural probe fabricated with SOI

    Institute of Scientific and Technical Information of China (English)

    PEI WeiHua; ZHU Lin; WANG ShuJing; GUO Kai; TANG Jun; ZHANG Xu; LU Lin; GAO ShangKai; CHEN HongDa

    2009-01-01

    Silicon-on-insulator (SOI) substrate is widely used in micro-electro-mechanical systems (MEMS). With the buried oxide layer of SOI acting as an etching stop, silicon based micro neural probes can be fabricated with improved uniformity and manufacturability. A seven-record-site neural probe was formed by inductive-coupled plasma (ICP) dry etching of an SOI substrate. The thickness of the probe is 15 μm. The shaft of the probe has dimensions of 3 mm × 100 μm × 15 μm, with a typical record-site area of 78.5 μm2. The impedance of the record sites was measured in vitro; typical impedances are around 2 MΩ at 1 kHz. The performance of the neural probe in vivo was tested on an anesthetized rat. The recorded neural spikes were typically around 140 μV, and spikes from individual sites could exceed 700 μV. The average signal-to-noise ratio was 7 or more.

  16. Hybrid multiobjective evolutionary design for artificial neural networks.

    Science.gov (United States)

    Goh, Chi-Keong; Teoh, Eu-Jin; Tan, Kay Chen

    2008-09-01

    Evolutionary algorithms are a class of stochastic search methods that attempt to emulate the biological process of evolution, incorporating concepts of selection, reproduction, and mutation. In recent years, there has been an increase in the use of evolutionary approaches in the training of artificial neural networks (ANNs). While evolutionary techniques for neural networks have been shown to provide superior performance over conventional training approaches, the simultaneous optimization of network performance and architecture will almost always result in a slow training process due to the added algorithmic complexity. In this paper, we present a geometrical measure based on the singular value decomposition (SVD) to estimate the necessary number of neurons to be used in training a single-hidden-layer feedforward neural network (SLFN). In addition, we develop a new hybrid multiobjective evolutionary approach that includes the features of a variable-length representation that allows for easy adaptation of neural network structures, an architectural recombination procedure based on the geometrical measure that adapts the number of necessary hidden neurons and facilitates the exchange of neuronal information between candidate designs, and a microhybrid genetic algorithm (microHGA) with an adaptive local search intensity scheme for local fine-tuning. In addition, the performances of well-known algorithms as well as the effectiveness and contributions of the proposed approach are analyzed and validated through a variety of data set types.
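
    The SVD-based idea can be illustrated with a small numpy sketch: form the hidden-layer activation matrix of a candidate SLFN on the training data, take its singular values, and count how many are needed to capture most of the spectral energy. This only illustrates the principle; the paper's exact geometrical measure and the evolutionary machinery are not reproduced.

      # Rough estimate of how many hidden neurons an SLFN needs, based on the
      # singular values of the hidden-layer activation matrix.
      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 5))        # training inputs (synthetic)
      W = rng.normal(size=(5, 30))         # 30 candidate hidden neurons
      H = np.tanh(X @ W)                   # hidden-layer activation matrix

      s = np.linalg.svd(H, compute_uv=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      needed = int(np.searchsorted(energy, 0.99) + 1)   # 99% energy criterion
      print("estimated necessary hidden neurons:", needed)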

  17. UNMANNED AIR VEHICLE STABILIZATION BASED ON NEURAL NETWORK REGULATOR

    Directory of Open Access Journals (Sweden)

    S. S. Andropov

    2016-09-01

    Full Text Available The problem of stabilizing a multirotor unmanned aerial vehicle in an environment with external disturbances is studied. A classic proportional-integral-derivative (PID) controller is analyzed and its flaws are outlined: inability to respond to changing external conditions and the need for manual adjustment of coefficients. The paper presents a neural-network-based method for adaptive adjustment of the PID coefficients. The neural network structure and its input and output data are described. Three-layer neural networks are used to create an adaptive stabilization system for the multirotor unmanned aerial vehicle; training is done with the backpropagation method. Each neural network outputs the regulator coefficients for one stabilization angle. Several transition-process plots at different stages of learning, including cases with external disturbances, are presented. It is shown that the system meets the stabilization requirements after a sufficient number of iterations. The described coefficient adjustment method can be used in remote control of unmanned aerial vehicles operating in changing environments.
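
    The idea of letting a network supply the PID gains can be sketched as follows, assuming a first-order plant and fixed illustrative network weights in place of the multirotor model and the backpropagation training described above.

      # Toy sketch: a small network maps the tracking error terms to bounded
      # PID gains that drive the plant. Training is omitted (fixed weights),
      # and the plant is a first-order lag, not a multirotor model.
      import numpy as np

      rng = np.random.default_rng(3)
      W = np.abs(rng.normal(0.5, 0.2, size=(3, 3)))     # maps [e, de, ie] -> gain logits

      def nn_gains(features):
          z = W @ features
          g = 1.0 / (1.0 + np.exp(-z))                  # squash to (0, 1)
          return 2.0 * g[0], 0.5 * g[1], 0.05 * g[2]    # Kp, Ki, Kd ranges (assumed)

      x, ie, prev_e, dt, setpoint = 0.0, 0.0, 0.0, 0.01, 1.0
      for _ in range(2000):
          e = setpoint - x
          ie += e * dt
          de = (e - prev_e) / dt
          Kp, Ki, Kd = nn_gains(np.array([e, de, ie]))
          u = Kp * e + Ki * ie + Kd * de                # PID law with adapted gains
          x += dt * (-x + u)                            # first-order plant
          prev_e = e
      print("final state:", round(x, 3))                # should settle near the setpoint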

  18. Constructing general partial differential equations using polynomial and neural networks.

    Science.gov (United States)

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems.

  19. Biocompatibility of a quad-shank neural probe

    Science.gov (United States)

    Tyson, Joel; Tran, Minhquan; Slaughter, Gymama

    2017-10-01

    Multichannel, flexible neural probes have been fabricated using standard CMOS techniques. The neural probe consists of four shanks with 16 recording sites each of approximately 290 μm2. The recording sites are created using gold rectangular pyramidal electrodes sandwiched between two polyimide dielectric layers. Windows in the first polyimide layer expose the electrode sites and bonding pads. The bonding pads and interconnect wires at the topmost section of the probe are soldered to tungsten wire and then encapsulated with epoxy to protect the interconnections from contact with phosphate buffered saline solution. The electrode test impedance values at 1 kHz are on average 135 kΩ. Multi-walled carbon nanotubes (MWCNTs) were deposited on the electrode sites, resulting in a reduction of the impedance at 1 kHz to 6.89 kΩ on average. Moreover, the viability and proliferation of PC12 cells on the surface of the probe were investigated by a trypan blue exclusion assay to evaluate the biocompatibility of the probe material. The PC12 cells attached and grew on the surfaces of the probe with no significant effect on the cells' morphology and viability. The polyimide probe displayed good cell viability and proliferation, making polyimide an attractive candidate material for the fabrication of neural probes.

  20. Neural correlation is stimulus modulated by feedforward inhibitory circuitry.

    Science.gov (United States)

    Middleton, Jason W; Omar, Cyrus; Doiron, Brent; Simons, Daniel J

    2012-01-11

    Correlated variability of neural spiking activity has important consequences for signal processing. How incoming sensory signals shape correlations of population responses remains unclear. Cross-correlations between spiking of different neurons may be particularly consequential in sparsely firing neural populations such as those found in layer 2/3 of sensory cortex. In rat whisker barrel cortex, we found that pairs of excitatory layer 2/3 neurons exhibit similarly low levels of spike count correlation during both spontaneous and sensory-evoked states. The spontaneous activity of excitatory-inhibitory neuron pairs is positively correlated, while sensory stimuli actively decorrelate joint responses. Computational modeling shows how threshold nonlinearities and local inhibition form the basis of a general decorrelating mechanism. We show that inhibitory population activity maintains low correlations in excitatory populations, especially during periods of sensory-evoked coactivation. The role of feedforward inhibition has been previously described in the context of trial-averaged phenomena. Our findings reveal a novel role for inhibition to shape correlations of neural variability and thereby prevent excessive correlations in the face of feedforward sensory-evoked activation.
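
    The basic quantity in such studies, the spike count correlation of a neuron pair across trials, can be computed as in the short sketch below; the Poisson counts and the shared rate fluctuation are synthetic stand-ins for recorded barrel-cortex data.

      # Spike count correlation between a pair of neurons across trials, with a
      # shared rate fluctuation standing in for common feedforward input.
      import numpy as np

      rng = np.random.default_rng(4)
      trials = 1000
      shared = rng.normal(0.0, 0.5, trials)             # common input fluctuation
      rate1 = np.clip(2.0 + shared, 0.05, None)         # sparse, layer 2/3-like rates
      rate2 = np.clip(2.0 + shared, 0.05, None)
      n1 = rng.poisson(rate1)
      n2 = rng.poisson(rate2)

      r_sc = np.corrcoef(n1, n2)[0, 1]                  # spike count correlation
      print("spike count correlation:", round(r_sc, 3))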

  1. Intrinsic Plasticity for Natural Competition in Koniocortex-Like Neural Networks.

    Science.gov (United States)

    Peláez, Francisco Javier Ropero; Aguiar-Furucho, Mariana Antonia; Andina, Diego

    2016-08-01

    In this paper, we use the neural property known as intrinsic plasticity to develop neural network models that resemble the koniocortex, the fourth layer of sensory cortices. These models evolved from a very basic two-layered neural network to a complex associative koniocortex network. In the initial network, intrinsic and synaptic plasticity govern the shifting of the activation function, and the modification of synaptic weights, respectively. In this first version, competition is forced, so that the most activated neuron is arbitrarily set to one and the others to zero, while in the second, competition occurs naturally due to inhibition between second layer neurons. In the third version of the network, whose architecture is similar to the koniocortex, competition also occurs naturally owing to the interplay between inhibitory interneurons and synaptic and intrinsic plasticity. A more complex associative neural network was developed based on this basic koniocortex-like neural network, capable of dealing with incomplete patterns and ideally suited to operating similarly to a learning vector quantization network. We also discuss the biological plausibility of the networks and their role in a more complex thalamocortical model.

  2. Kinetic study of rigid polyurethane foams thermal decomposition by artificial neural network

    OpenAIRE

    Ferreira, Bárbara D. L.; Silva, Virgínia R.; Yoshida, Maria Irene; Sebastião, Rita C. O.

    2016-01-01

    Kinetic models of solid thermal decomposition are traditionally used for individual fit of isothermal decomposition experimental data. However, this methodology can provide unacceptable errors in some cases. To solve this problem, a neural network (MLP) was developed and adopted in this work. The implemented algorithm uses the rate constants as predetermined weights between the input and intermediate layer and kinetic models as activation functions of neurons in the hidden layer. The contribu...

  3. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-07-10

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed that has achieved, in actual field demonstrations, results unattainable to date with industry-standard tools.
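
    A least-squares sketch in the spirit of single-pass algebraic training is shown below: hidden weights are fixed and the output weights are obtained from one linear solve. This is an extreme-learning-machine-style stand-in on synthetic data, not the actual DeepNet code or its virtual input layer.

      # Single-pass, algebra-based fit of a one-hidden-layer network: fix the
      # hidden weights and solve a linear least-squares problem for the output
      # weights (one "iteration").
      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.normal(size=(500, 10))                     # seismic attribute vectors (synthetic)
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)   # stand-in for a well-log target

      W_hidden = rng.normal(size=(10, 50))
      H = np.tanh(X @ W_hidden)                          # fixed nonlinear hidden layer
      w_out, *_ = np.linalg.lstsq(H, y, rcond=None)      # one linear solve

      pred = H @ w_out
      print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 3))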

  4. Risk Management in Adventure Programs with Special Populations: Two Hidden Dangers.

    Science.gov (United States)

    Stich, Thomas F.; Gaylor, Michael S.

    1984-01-01

    Presents practical information about psychiatric medications and psychological emergencies to assist staff working in outdoor settings with mental health patients. Discusses the potential hazards and side effects of psychotropic drugs. Provides step-by-step guidelines for distinguishing between medical and psychological problems, assessing…

  5. Recurrent neural collective classification.

    Science.gov (United States)

    Monner, Derek D; Reggia, James A

    2013-12-01

    With the recent surge in availability of data sets containing not only individual attributes but also relationships, classification techniques that take advantage of predictive relationship information have gained in popularity. The most popular existing collective classification techniques have a number of limitations: some of them generate arbitrary and potentially lossy summaries of the relationship data, whereas others ignore the directionality and strength of relationships. Popular existing techniques make use of only direct neighbor relationships when classifying a given entity, ignoring potentially useful information contained in expanded neighborhoods of radius greater than one. We present a new technique that we call recurrent neural collective classification (RNCC), which avoids arbitrary summarization, uses information about relationship directionality and strength, and, through recursive encoding, learns to leverage larger relational neighborhoods around each entity. Experiments with synthetic data sets show that RNCC can make effective use of relationship data for both direct and expanded neighborhoods. Further experiments demonstrate that our technique outperforms previously published results of several collective classification methods on a number of real-world data sets.

  6. Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm

    National Research Council Canada - National Science Library

    Nagesh Ramprasad

    2016-01-01

    .... In this research, focus is on the Microsoft Neural System Algorithm. The Microsoft Neural System Algorithm is a simple implementation of the adaptable and popular neural networks that are used in the machine learning...

  7. Isolation and culture of neural crest cells from embryonic murine neural tube.

    Science.gov (United States)

    Pfaltzgraff, Elise R; Mundell, Nathan A; Labosky, Patricia A

    2012-06-02

    The embryonic neural crest (NC) is a multipotent progenitor population that originates at the dorsal aspect of the neural tube, undergoes an epithelial to mesenchymal transition (EMT) and migrates throughout the embryo, giving rise to diverse cell types. NC also has the unique ability to influence the differentiation and maturation of target organs. When explanted in vitro, NC progenitors undergo self-renewal, migrate and differentiate into a variety of tissue types including neurons, glia, smooth muscle cells, cartilage and bone. NC multipotency was first described from explants of the avian neural tube. In vitro isolation of NC cells facilitates the study of NC dynamics including proliferation, migration, and multipotency. Further work in the avian and rat systems demonstrated that explanted NC cells retain their NC potential when transplanted back into the embryo. Because these inherent cellular properties are preserved in explanted NC progenitors, the neural tube explant assay provides an attractive option for studying the NC in vitro. To attain a better understanding of the mammalian NC, many methods have been employed to isolate NC populations. NC-derived progenitors can be cultured from post-migratory locations in both the embryo and adult to study the dynamics of post-migratory NC progenitors, however isolation of NC progenitors as they emigrate from the neural tube provides optimal preservation of NC cell potential and migratory properties. Some protocols employ fluorescence activated cell sorting (FACS) to isolate a NC population enriched for particular progenitors. However, when starting with early stage embryos, cell numbers adequate for analyses are difficult to obtain with FACS, complicating the isolation of early NC populations from individual embryos. Here, we describe an approach that does not rely on FACS and results in an approximately 96% pure NC population based on a Wnt1-Cre activated lineage reporter. The method presented here is adapted from

  8. Graphene microelectrode arrays for neural activity detection.

    Science.gov (United States)

    Du, Xiaowei; Wu, Lei; Cheng, Ji; Huang, Shanluo; Cai, Qi; Jin, Qinghui; Zhao, Jianlong

    2015-09-01

    We demonstrate a method to fabricate graphene microelectrode arrays (MEAs) using a simple and inexpensive method to solve the problem of opaque electrode positions in traditional MEAs, while keeping good biocompatibility. To study the interface differences between graphene-electrolyte and gold-electrolyte, graphene and gold electrodes with a large area were fabricated. According to the simulation results of electrochemical impedances, the gold-electrolyte interface can be described as a classical double-layer structure, while the graphene-electrolyte interface can be explained by a modified double-layer theory. Furthermore, using graphene MEAs, we detected the neural activities of neurons dissociated from Wistar rats (embryonic day 18). The signal-to-noise ratio of the detected signal was 10.31 ± 1.2, which is comparable to those of MEAs made with other materials. The long-term stability of the MEAs is demonstrated by comparing differences in Bode diagrams taken before and after cell culturing.
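
    For reference, the classical double-layer picture used for the gold interface can be evaluated with a simple series resistance-capacitance model, as in the sketch below; the parameter values are illustrative and the paper's modified model for the graphene interface is not reproduced.

      # Magnitude of a series R_s-C_dl interface impedance at a given frequency.
      # Parameter values are illustrative, not fitted to the reported electrodes.
      import numpy as np

      def series_rc_impedance(f, r_s=10e3, c_dl=100e-9):
          """|Z| of R_s in series with C_dl at frequency f (Hz)."""
          omega = 2 * np.pi * f
          z = r_s + 1.0 / (1j * omega * c_dl)
          return abs(z)

      print("|Z| at 1 kHz: %.1f kOhm" % (series_rc_impedance(1e3) / 1e3))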

  9. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Alsallakh, Bilal; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2017-08-29

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it also dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvements in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  10. Neural Dynamics and Information Representation in Microcircuits of Motor Cortex

    Directory of Open Access Journals (Sweden)

    Yasuhiro Tsubo

    2013-05-01

    Full Text Available The brain has to analyze and respond to external events that can change rapidly from time to time, suggesting that information processing by the brain may be essentially dynamic rather than static. The dynamical features of neural computation are of significant importance in motor cortex, which governs the process of movement generation and learning. In this paper, we discuss these features based primarily on our recent findings on neural dynamics and information coding in the microcircuit of rat motor cortex. In fact, cortical neurons show a variety of dynamical behaviors, from rhythmic activity in various frequency bands to highly irregular spike firing. Of particular interest are the similarity and dissimilarity of the neuronal response properties in different layers of motor cortex. By conducting electrophysiological recordings in slice preparations, we report the phase response curves of neurons in different cortical layers to demonstrate their layer-dependent synchronization properties. We then study how motor cortex recruits task-related neurons in different layers for voluntary arm movements by simultaneous juxtacellular and multiunit recordings from behaving rats. The results suggest an interesting difference in the spectrum of functional activity between the superficial and deep layers. Furthermore, the task-related activities recorded from various layers exhibited power law distributions of inter-spike intervals (ISIs), in contrast to a general belief that ISIs obey Poisson or Gamma distributions in cortical neurons. We present a theoretical argument that this power law of in vivo neurons may represent the maximization of the entropy of the firing rate under the limited energy consumption of spike generation. Though further studies are required to fully clarify the functional implications of this coding principle, it may shed new light on information representations by neurons and circuits in motor cortex.

  11. Piezoelectric Resonator with Two Layers

    Science.gov (United States)

    Stephanou, Philip J. (Inventor); Black, Justin P. (Inventor)

    2013-01-01

    A piezoelectric resonator device includes: a top electrode layer with a patterned structure, a top piezoelectric layer adjacent to the top layer, a middle metal layer adjacent to the top piezoelectric layer opposite the top layer, a bottom piezoelectric layer adjacent to the middle layer opposite the top piezoelectric layer, and a bottom electrode layer with a patterned structure and adjacent to the bottom piezoelectric layer opposite the middle layer. The top layer includes a first plurality of electrodes inter-digitated with a second plurality of electrodes. A first one of the electrodes in the top layer and a first one of the electrodes in the bottom layer are coupled to a first contact, and a second one of the electrodes in the top layer and a second one of the electrodes in the bottom layer are coupled to a second contact.

  12. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble, which reflects LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results clearly show that LEA is better than other ensemble and nonensemble methods.
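
    A simplified sketch of the two-layer idea follows: candidate lags are scored on a validation split (standing in for the first ensemble layer), and an ensemble of MLPs trained on different subsets then forecasts with the chosen lag. The data, network sizes and ensemble construction are illustrative assumptions rather than the actual LEA training scheme.

      # Layer 1: pick a lag by validation error. Layer 2: average an ensemble
      # of MLPs trained on different bootstrap subsets using that lag.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)
      series = np.sin(np.arange(400) * 0.2) + 0.1 * rng.normal(size=400)

      def make_xy(s, lag):
          X = np.array([s[i:i + lag] for i in range(len(s) - lag)])
          return X, s[lag:]

      def val_error(lag):
          X, y = make_xy(series[:300], lag)
          net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
          net.fit(X[:-50], y[:-50])
          return np.mean((net.predict(X[-50:]) - y[-50:]) ** 2)

      best_lag = min(range(2, 8), key=val_error)

      X, y = make_xy(series, best_lag)
      preds = []
      for seed in range(3):
          idx = rng.choice(len(X) - 50, size=250, replace=True)
          net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
          net.fit(X[idx], y[idx])
          preds.append(net.predict(X[-50:]))
      print("best lag:", best_lag,
            "test MSE:", round(float(np.mean((np.mean(preds, axis=0) - y[-50:]) ** 2)), 4))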

  13. On the implementation of frontier-to-root tree automata in recursive neural networks.

    Science.gov (United States)

    Gori, M; Küchler, A; Sperduti, A

    1999-01-01

    In this paper we explore the node complexity of recursive neural network implementations of frontier-to-root tree automata (FRA). Specifically, we show that an FRAO (Mealy version) with m states, l input-output labels, and maximum rank N can be implemented by a recursive neural network with O(√((log l + log m) l m^N / (log l + N log m))) units and four computational layers, i.e., without counting the input layer. A lower bound is derived which is tight when no restrictions are placed on the number of layers. Moreover, we present a construction with three computational layers having node complexity of O((log l + log m) √(l m^N)) and O((log l + log m) l m^N) connections. A construction with two computational layers is given that implements any given FRAO with a node complexity of O(l m^N) and O((log l + N log m) l m^N) connections. As a corollary we also get a new upper bound for the implementation of finite-state automata (FSA) into recurrent neural networks with three computational layers.

  14. Building biomedical materials layer-by-layer

    Directory of Open Access Journals (Sweden)

    Paula T. Hammond

    2012-05-01

    Full Text Available In this materials perspective, the promise of water based layer-by-layer (LbL assembly as a means of generating drug-releasing surfaces for biomedical applications, from small molecule therapeutics to biologic drugs and nucleic acids, is examined. Specific advantages of the use of LbL assembly versus traditional polymeric blend encapsulation are discussed. Examples are provided to present potential new directions. Translational opportunities are discussed to examine the impact and potential for true biomedical translation using rapid assembly methods, and applications are discussed with high need and medical return.

  15. Identification of Complex Dynamical Systems with Neural Networks (2/2)

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical system in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  16. Identification of Complex Dynamical Systems with Neural Networks (1/2)

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The identification and analysis of high dimensional nonlinear systems is obviously a challenging task. Neural networks have been proven to be universal approximators but this still leaves the identification task a hard one. To do it efficiently, we have to violate some of the rules of classical regression theory. Furthermore we should focus on the interpretation of the resulting model to overcome its black box character. First, we will discuss function approximation with 3 layer feedforward neural networks up to new developments in deep neural networks and deep learning. These nets are not only of interest in connection with image analysis but are a center point of the current artificial intelligence developments. Second, we will focus on the analysis of complex dynamical system in the form of state space models realized as recurrent neural networks. After the introduction of small open dynamical systems we will study dynamical systems on manifolds. Here manifold and dynamics have to be identified in parall...

  17. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some...... previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally...... on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models and here these comparisons are extended to neural networks. Finally, a nonlinear model...

  18. Transformation of Neural State Space Models into LFT Models for Robust Control Design

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, Klaus

    2000-01-01

    This paper considers the extraction of linear state space models and uncertainty models from neural networks trained as state estimators with direct application to robust control. A new method for writing a neural state space model in a linear fractional transformation form in a non-conservative way is proposed, and it is demonstrated how a standard robust control law can be designed for a system described by means of a multi layer perceptron.

  19. NONLINEAR MODELING AND CONTROLLING OF ARTIFICIAL MUSCLE SYSTEM USING NEURAL NETWORKS

    Institute of Scientific and Technical Information of China (English)

    Tian Sheping; Ding Guoqing; Yan Detian; Lin Liangming

    2004-01-01

    Pneumatic artificial muscles are widely used in fields such as medical robotics. Neural networks are applied to the modeling and control of an artificial muscle system. A single-joint artificial muscle test system is designed. The recursive prediction error (RPE) algorithm, which yields faster convergence than the back propagation (BP) algorithm, is applied to train the neural networks, and the realization of the RPE algorithm is given. Differences in modeling the artificial muscles with neural networks having different numbers of input nodes and hidden-layer nodes are discussed. On this basis, a nonlinear neural-network control scheme for the artificial muscle system is introduced. The experimental results show that the nonlinear control scheme yields faster response and higher control accuracy than the traditional linear control scheme.

  20. Computationally efficient locally-recurrent neural networks for online signal processing

    CERN Document Server

    Hussain, A; Shim, I

    1999-01-01

    A general class of computationally efficient locally recurrent networks (CERN) is described for real-time adaptive signal processing. The structure of the CERN is based on linear-in-the-parameters single-hidden-layered feedforward neural networks such as the radial basis function (RBF) network, the Volterra neural network (VNN) and the functionally expanded neural network (FENN), adapted to employ local output feedback. The corresponding learning algorithms are derived and key structural and computational complexity comparisons are made between the CERN and conventional recurrent neural networks. Two case studies are performed involving the real-time adaptive nonlinear prediction of real-world chaotic, highly non-stationary laser time series and an actual speech signal, which show that a recurrent FENN based adaptive CERN predictor can significantly outperform the corresponding feedforward FENN and conventionally employed linear adaptive filtering models. (13 refs).

  1. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks.

    Science.gov (United States)

    Maca, Petr; Pech, Pavel

    2016-01-01

    The presented paper compares forecasts of drought indices based on two different artificial neural network models. The first model is based on a feedforward multilayer perceptron, sANN, and the second one is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI) and were derived for the period of 1948-2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons.

  2. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2016-01-01

    Full Text Available The presented paper compares forecasts of drought indices based on two different artificial neural network models. The first model is based on a feedforward multilayer perceptron, sANN, and the second one is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI) and were derived for the period of 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons.
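
    Training a one-hidden-layer perceptron with differential evolution can be sketched as below. SciPy's standard DE is used in place of the adaptive JADE variant, and a synthetic series stands in for the SPI/SPEI data, so this only illustrates the kind of optimization involved.

      # Fit the weights of a one-hidden-layer perceptron by minimizing the
      # training MSE with differential evolution (standard DE, not JADE).
      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(7)
      index = np.sin(np.arange(300) * 0.1) + 0.2 * rng.normal(size=300)   # synthetic drought index
      lag = 6
      X = np.array([index[i:i + lag] for i in range(len(index) - lag)])
      y = index[lag:]

      n_hidden = 4
      n_par = n_hidden * lag + n_hidden + n_hidden + 1        # W1, b1, w2, b2

      def forward(p, X):
          W1 = p[:n_hidden * lag].reshape(n_hidden, lag)
          b1 = p[n_hidden * lag:n_hidden * lag + n_hidden]
          w2 = p[-n_hidden - 1:-1]
          b2 = p[-1]
          return np.tanh(X @ W1.T + b1) @ w2 + b2

      def mse(p):
          return float(np.mean((forward(p, X) - y) ** 2))

      res = differential_evolution(mse, bounds=[(-2, 2)] * n_par,
                                   maxiter=60, seed=0, polish=False)
      print("training MSE:", round(float(res.fun), 4))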

  3. Fault detection and classification in electrical power transmission system using artificial neural network.

    Science.gov (United States)

    Jamil, Majid; Sharma, Sanjeev Kumar; Singh, Rajveer

    2015-01-01

    This paper focuses on the detection and classification of faults on an electrical power transmission line using artificial neural networks. The three phase currents and voltages at one end are taken as inputs in the proposed scheme. A feed-forward neural network with the back propagation algorithm has been employed for detection and classification of the faults, analyzing each of the three phases involved in the process. A detailed analysis with a varying number of hidden layers has been performed to validate the choice of the neural network. The simulation results show that the present neural-network-based method detects and classifies faults on transmission lines with satisfactory performance. The different faults are simulated with different parameters to check the versatility of the method. The proposed method can be extended to the distribution network of the power system. The simulations and signal analysis were done in the MATLAB® environment.
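
    A minimal sketch of the classification stage is given below: a feed-forward network trained by backpropagation on six inputs (three phase currents and three phase voltages). The features and the fault labels are synthetic placeholders rather than simulated transmission-line signals.

      # Feed-forward classifier on six per-phase inputs; synthetic data only.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(8)
      n = 3000
      X = rng.normal(size=(n, 6))                   # [Ia, Ib, Ic, Va, Vb, Vc]
      y = (X[:, 0] > 1.5).astype(int)               # crude "phase-A fault" label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=500, random_state=0)
      clf.fit(X_tr, y_tr)
      print("test accuracy:", round(clf.score(X_te, y_te), 3))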

  4. Parsing recursive sentences with a connectionist model including a neural stack and synaptic gating.

    Science.gov (United States)

    Fedor, Anna; Ittzés, Péter; Szathmáry, Eörs

    2011-02-21

    It is supposed that humans are genetically predisposed to be able to recognize sequences of context-free grammars with centre-embedded recursion while other primates are restricted to the recognition of finite state grammars with tail-recursion. Our aim was to construct a minimalist neural network that is able to parse artificial sentences of both grammars in an efficient way without using the biologically unrealistic backpropagation algorithm. The core of this network is a neural stack-like memory where the push and pop operations are regulated by synaptic gating on the connections between the layers of the stack. The network correctly categorizes novel sentences of both grammars after training. We suggest that the introduction of the neural stack memory will turn out to be substantial for any biological 'hierarchical processor' and the minimalist design of the model suggests a quest for similar, realistic neural architectures.

  5. Short-Term Wind Speed Forecast Based on B-Spline Neural Network Optimized by PSO

    Directory of Open Access Journals (Sweden)

    Zhongqiang Wu

    2015-01-01

    Full Text Available Considering the randomness and volatility of wind, a method based on a B-spline neural network optimized by particle swarm optimization is proposed to predict short-term wind speed. The B-spline neural network can flexibly change the division of the input space and the definition of the basis functions. For any input, only a few hidden-layer outputs are nonzero and the outputs are simple, so the convergence speed is fast, but the network easily falls into local minima. The traditional way of dividing the input space is chosen without careful consideration and affects the final prediction accuracy. Particle swarm optimization is adopted to solve this problem by optimizing the nodes. Simulation results show that the proposed method has higher prediction accuracy than the traditional B-spline neural network and the BP neural network.

  6. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  7. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

    Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network can create new hidden neurons online if the input data fall outside the coverage of the existing hidden layer, and can prune insignificant hidden neurons online when they become inappropriate. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, simulation results verify that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.

  8. Kalman filtering for neural prediction of response spectra from mining tremors

    Energy Technology Data Exchange (ETDEWEB)

    Krok, A.; Waszczyszyn, Z. [Cracow University of Technology, Krakow (Poland)]

    2007-08-15

    Acceleration response spectra (ARS) for mining tremors in the Upper Silesian Coalfield, Poland are generated using neural networks trained by means of Kalman filtering. The target ARS were computed on the basis of measured accelerograms. It was proved that the standard feed-forward, layered neural network trained by the DEKF (decoupled extended Kalman filter) algorithm is numerically much less efficient than the standard recurrent NN learnt by Recurrent DEKF, cf. (Haykin S, (editor). Kalman filtering and neural networks. New York: John Wiley & Sons; 2001). It is also shown that the studied KF algorithms are better than the traditional Resilient-Propagation learning method. The improvement of the training process and neural prediction due to the introduction of an autoregressive input is also discussed in the paper.

  9. Modeling and prediction of Turkey's electricity consumption using Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kavaklioglu, Kadir; Ozturk, Harun Kemal; Canyurt, Olcay Ersel [Pamukkale University, Mechanical Engineering Department, Denizli (Turkey)]; Ceylan, Halim [Pamukkale University, Civil Engineering Department, Denizli (Turkey)]

    2009-11-15

    Artificial Neural Networks are proposed to model and predict the electricity consumption of Turkey. A multi-layer perceptron with the backpropagation training algorithm is used as the neural network topology. Tangent-sigmoid and pure-linear transfer functions are selected for the hidden and output layer processing elements, respectively. These input-output network models are a result of relationships that exist among electricity consumption and several other socioeconomic variables. Electricity consumption is modeled as a function of economic indicators such as population, gross national product, imports and exports. It is also modeled using the export-import ratio and time input only. Performance comparison among different models is made based on absolute and percentage mean square error. Electricity consumption of Turkey is predicted until 2027 using data from 1975 to 2006 along with other economic indicators. The results show that electricity consumption can be modeled using Artificial Neural Networks, and the models can be used to predict future electricity consumption. (author)
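
    The topology described above, tangent-sigmoid hidden units feeding a pure-linear output unit, corresponds to the forward pass sketched below; the weights and the normalized indicator values are placeholders, not parameters fitted to the Turkish consumption data.

      # Forward pass of a tansig-hidden / purelin-output perceptron.
      import numpy as np

      rng = np.random.default_rng(9)
      # Inputs: population, GNP, imports, exports (normalized, synthetic values).
      x = np.array([0.7, 0.5, 0.3, 0.4])

      W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
      w2, b2 = rng.normal(size=6), 0.0

      hidden = np.tanh(W1 @ x + b1)        # tangent-sigmoid hidden layer
      consumption = w2 @ hidden + b2       # pure-linear output (predicted demand)
      print(round(float(consumption), 3))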

  10. Entropy-based generation of supervised neural networks for classification of structured patterns.

    Science.gov (United States)

    Tsai, Hsien-Leing; Lee, Shie-Jue

    2004-03-01

    Sperduti and Starita proposed a new type of neural network which consists of generalized recursive neurons for classification of structures. In this paper, we propose an entropy-based approach for constructing such neural networks for classification of acyclic structured patterns. Given a classification problem, the architecture, i.e., the number of hidden layers and the number of neurons in each hidden layer, and all the values of the link weights associated with the corresponding neural network are automatically determined. Experimental results have shown that the networks constructed by our method can have a better performance, with respect to network size, learning speed, or recognition accuracy, than the networks obtained by other methods.

  11. The holographic neural network: Performance comparison with other neural networks

    Science.gov (United States)

    Klepko, Robert

    1991-10-01

    The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed, improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  12. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2016-07-14

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  13. Multi-layers castings

    Directory of Open Access Journals (Sweden)

    J. Szajnar

    2010-01-01

    Full Text Available The paper presents the possibility of making multi-layer cast steel castings by combining casting and welding coating technologies. The first layer was a composite surface layer based on an Fe-Cr-C alloy, applied directly during the founding of cast carbon steel 200–450 using the mould cavity preparation method. The second layer consisted of padding welds deposited by TIG (Tungsten Inert Gas) surfacing, with fillers based on a Ni matrix, on Ni and Co matrices with tungsten carbides (WC), and on an Fe-Cr-C alloy of the same chemical composition as the alloy used for the composite surface layer. The usability of the casting surface layers for industrial applications was assessed by the criteria of hardness and metal-mineral abrasive wear resistance.

  14. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    Science.gov (United States)

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The optimal artificial neural network architecture was a feed-forward network with three input neurons, one hidden layer with eight neurons and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R2 of 0.9571, which implied good agreement between the predicted and actual values and confirmed good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The measured total phenolic content under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which matched well with the predicted value (597.2 mg/g DW). This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  15. Model of Information Security Risk Assessment based on Improved Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Gang Chen

    2013-09-01

    Full Text Available This paper concentrates on an information security risk assessment model utilizing an improved wavelet neural network. The structure of the wavelet neural network is similar to that of a multi-layer neural network: a feed-forward neural network with one or more inputs. We point out that the training process of the wavelet neural network is made up of four steps, repeated until the value of the error function satisfies a pre-defined error criterion. In order to enhance the quality of information security risk assessment, we propose a modified version of the wavelet neural network which can effectively combine all influencing factors in assessing information security risk by linearly integrating several weights. Furthermore, the proposed wavelet neural network is trained by the BP algorithm in batch mode, and the weight coefficients of the wavelet are modified with the adopting mode. Finally, a series of experiments is conducted for performance evaluation. The experimental results show that the proposed model can assess information security risk accurately and rapidly.

  16. The Chebyshev-polynomials-based unified model neural networks for function approximation.

    Science.gov (United States)

    Lee, T T; Jeng, J T

    1998-01-01

    In this paper, we propose the approximate transformable technique, which includes direct transformation and indirect transformation, to obtain a Chebyshev-Polynomials-Based (CPB) unified model of neural networks for feedforward/recurrent neural networks via Chebyshev polynomial approximation. Based on this approximate transformable technique, we have derived the relationship between single-layer neural networks and multilayer perceptron neural networks. It is shown that the CPB unified model neural networks can be represented as functional link networks that are based on Chebyshev polynomials, and those networks use the recursive least squares method with a forgetting factor as the learning algorithm. It turns out that the CPB unified model neural networks not only have the same universal approximation capability as conventional feedforward/recurrent neural networks but also learn faster. Furthermore, we have also derived the condition under which the unified model generated by Chebyshev polynomials is optimal in the sense of least-squares error approximation in the single-variable case. Computer simulations show that the proposed method does have the capability of a universal approximator in some functional approximations, with a considerable reduction in learning time.
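
    In the single-variable case, a Chebyshev functional-link model of this kind can be sketched as below: the input is expanded into Chebyshev polynomial features and the linear output weights are updated by recursive least squares with a forgetting factor. The target function, polynomial order and forgetting factor are illustrative assumptions.

      # Chebyshev functional-link model trained by recursive least squares
      # with a forgetting factor (single-variable toy example).
      import numpy as np

      def cheb_features(x, order=5):
          """T_0..T_order evaluated at x in [-1, 1] via the recurrence."""
          T = [np.ones_like(x), x]
          for _ in range(2, order + 1):
              T.append(2 * x * T[-1] - T[-2])
          return np.stack(T[:order + 1], axis=-1)

      rng = np.random.default_rng(10)
      order, lam = 5, 0.99                      # polynomial order, forgetting factor
      w = np.zeros(order + 1)
      P = np.eye(order + 1) * 1e3               # inverse correlation matrix

      for _ in range(2000):                     # streaming RLS updates
          x = rng.uniform(-1, 1)
          phi = cheb_features(np.array(x), order)
          d = np.sin(np.pi * x)                 # toy target to be approximated
          k = P @ phi / (lam + phi @ P @ phi)   # gain vector
          w = w + k * (d - phi @ w)
          P = (P - np.outer(k, phi @ P)) / lam

      xs = np.linspace(-1, 1, 5)
      print(np.round(cheb_features(xs, order) @ w - np.sin(np.pi * xs), 3))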

  17. Adiabatic superconducting cells for ultra-low-power artificial neural networks.

    Science.gov (United States)

    Schegolev, Andrey E; Klenov, Nikolay V; Soloviev, Igor I; Tereshonok, Maxim V

    2016-01-01

    We propose the concept of using superconducting quantum interferometers for the implementation of neural network algorithms with extremely low power dissipation. These adiabatic elements are Josephson cells with sigmoid- and Gaussian-like activation functions. We optimize their parameters for application in three-layer perceptron and radial basis function networks.

  18. Adiabatic superconducting cells for ultra-low-power artificial neural networks

    Directory of Open Access Journals (Sweden)

    Andrey E. Schegolev

    2016-10-01

    Full Text Available We propose the concept of using superconducting quantum interferometers for the implementation of neural network algorithms with extremely low power dissipation. These adiabatic elements are Josephson cells with sigmoid- and Gaussian-like activation functions. We optimize their parameters for application in three-layer perceptron and radial basis function networks.

  19. Folk music style modelling by recurrent neural networks with long short term memory units

    OpenAIRE

    Sturm, Bob; Santos, João Felipe; Korshunova, Iryna

    2015-01-01

    We demonstrate two generative models created by training a recurrent neural network (RNN) with three hidden layers of long short-term memory (LSTM) units. This extends past work in numerous directions, including training deeper models with nearly 24,000 high-level transcriptions of folk tunes. We discuss our on-going work.
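
    The model family described above can be sketched in PyTorch as a network with three stacked LSTM layers over a token vocabulary (for example, ABC-notation characters); the vocabulary size and hidden size are assumptions, and the transcription data and training loop are not included.

      # Three stacked LSTM layers producing next-token logits per position.
      import torch
      import torch.nn as nn

      class FolkRNN(nn.Module):
          def __init__(self, vocab_size=100, hidden_size=256):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, hidden_size)
              self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers=3, batch_first=True)
              self.out = nn.Linear(hidden_size, vocab_size)

          def forward(self, tokens, state=None):
              h, state = self.lstm(self.embed(tokens), state)
              return self.out(h), state          # next-token logits per position

      logits, _ = FolkRNN()(torch.randint(0, 100, (2, 32)))
      print(logits.shape)                        # torch.Size([2, 32, 100])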

  20. Predicting Item Difficulty in a Reading Comprehension Test with an Artificial Neural Network.

    Science.gov (United States)

    Perkins, Kyle; And Others

    1995-01-01

    This article reports the results of using a three-layer back propagation artificial neural network to predict item difficulty in a reading comprehension test. Three classes of variables were examined: text structure, propositional analysis, and cognitive demand. Results demonstrate that the networks can consistently predict item difficulty. (JL)