WorldWideScience

Sample records for hierarchical neural network

  1. Modular, Hierarchical Learning By Artificial Neural Networks

    Science.gov (United States)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks that are more structured than those in which all neurons are fully interconnected. These networks use a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  2. Hierarchical Neural Network Structures for Phoneme Recognition

    CERN Document Server

    Vasquez, Daniel; Minker, Wolfgang

    2013-01-01

    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task, where a hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayer Perceptron, with the output of the first level serving as input to the second level. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first-level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information; the computational time could be reduced by 57% while keeping the system accuracy comparable to the baseline hierarchical approach.
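
    A minimal numpy sketch of the two-level idea described above (not the authors' code): a first MLP maps acoustic features to phoneme posteriors, and those posteriors are fed as input to a second MLP. All sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP with softmax output."""
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_feat, n_hidden, n_phonemes = 39, 64, 40           # e.g. MFCC features, 40 phoneme classes (illustrative)

# Level 1: acoustic features -> phoneme posteriors
W1a, b1a = rng.normal(0, 0.1, (n_feat, n_hidden)), np.zeros(n_hidden)
W2a, b2a = rng.normal(0, 0.1, (n_hidden, n_phonemes)), np.zeros(n_phonemes)

# Level 2: level-1 posteriors -> refined posteriors
W1b, b1b = rng.normal(0, 0.1, (n_phonemes, n_hidden)), np.zeros(n_hidden)
W2b, b2b = rng.normal(0, 0.1, (n_hidden, n_phonemes)), np.zeros(n_phonemes)

x = rng.normal(size=(5, n_feat))                     # 5 acoustic frames (stand-in data)
level1 = mlp_forward(x, W1a, b1a, W2a, b2a)
level2 = mlp_forward(level1, W1b, b1b, W2b, b2b)     # hierarchical refinement
print(level2.argmax(axis=1))                         # predicted phoneme index per frame
```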

  3. Neural Mechanisms of Hierarchical Planning in a Virtual Subway Network.

    Science.gov (United States)

    Balaguer, Jan; Spiers, Hugo; Hassabis, Demis; Summerfield, Christopher

    2016-05-18

    Planning allows actions to be structured in pursuit of a future goal. However, in natural environments, planning over multiple possible future states incurs prohibitive computational costs. To represent plans efficiently, states can be clustered hierarchically into "contexts". For example, representing a journey through a subway network as a succession of individual states (stations) is more costly than encoding a sequence of contexts (lines) and context switches (line changes). Here, using functional brain imaging, we asked humans to perform a planning task in a virtual subway network. Behavioral analyses revealed that humans executed a hierarchically organized plan. Brain activity in the dorsomedial prefrontal cortex and premotor cortex scaled with the cost of hierarchical plan representation and unique neural signals in these regions signaled contexts and context switches. These results suggest that humans represent hierarchical plans using a network of caudal prefrontal structures. VIDEO ABSTRACT.

  4. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    Science.gov (United States)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown great success on image classification. CNNs mainly consist of convolutional and pooling layers, both of which operate on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information in sequential data, and they require only a limited number of network parameters. Because general RNNs can hardly be applied directly to non-sequential data, we propose hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions at the same scale but different locations, while cross-scale RNN connections model scale dependencies among regions at the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of higher computational cost. In this manuscript, we integrate CNNs with HRNNs and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only exploit the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT Indoor, and competitive results on ILSVRC 2012.
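
    A rough sketch of the general idea, under assumptions rather than the paper's implementation: a simple recurrent sweep over region features within one scale, plus a cross-scale connection feeding the finer scale's summary into the coarser one. Feature sizes and the pooling choice are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16                                    # region feature dimension (illustrative)

def rnn_sweep(regions, Wx, Wh):
    """Scan regions in raster order; each hidden state sees the previous region."""
    h = np.zeros(Wh.shape[0])
    states = []
    for r in regions:
        h = np.tanh(Wx @ r + Wh @ h)
        states.append(h)
    return np.stack(states)

fine = rng.normal(size=(16, d))           # 4x4 grid of region features (e.g. from a CNN)
coarse = rng.normal(size=(4, d))          # 2x2 grid at a coarser scale

Wx, Wh = rng.normal(0, 0.1, (d, d)), rng.normal(0, 0.1, (d, d))
Wscale = rng.normal(0, 0.1, (d, d))

fine_states = rnn_sweep(fine, Wx, Wh)
# Cross-scale connection: the coarser scale also receives the pooled fine-scale state.
pooled = fine_states.mean(axis=0)
coarse_states = rnn_sweep(coarse + pooled @ Wscale.T, Wx, Wh)
image_repr = coarse_states.mean(axis=0)   # image-level representation fed to a classifier
```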

  5. SOFM Neural Network Based Hierarchical Topology Control for Wireless Sensor Networks

    OpenAIRE

    2014-01-01

    Well-designed network topology provides vital support for routing, data fusion, and target tracking in wireless sensor networks (WSNs). The self-organizing feature map (SOFM) neural network is a major branch of artificial neural networks with self-organizing and self-learning features. In this paper, we propose a cluster-based topology control algorithm for WSNs, named SOFMHTC, which uses a SOFM neural network to form a hierarchical network structure and completes cluster head selection by the...

  6. Hierarchical modular granular neural networks with fuzzy aggregation

    CERN Document Server

    Sanchez, Daniela

    2016-01-01

    In this book, a new method for hybrid intelligent systems is proposed. The proposed method is based on a granular computing approach applied at two levels. The techniques used and combined in the proposed method are modular neural networks (MNNs) with a Granular Computing (GrC) approach, resulting in a new concept of MNNs: modular granular neural networks (MGNNs). In addition, fuzzy logic (FL) and hierarchical genetic algorithms (HGAs) are used in this work to improve results. These techniques were chosen because they have proved to be a good option in other works, and in the case of MNNs and HGAs they improve on the results obtained with their conventional counterparts, artificial neural networks and genetic algorithms respectively.

  7. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    Science.gov (United States)

    Nitta, Tohru

    2016-06-30

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in deep neural network models that have no critical points introduced by a hierarchical structure. Such models are expected to behave well under gradient-based optimization. First, we show that a large number of critical points introduced by a hierarchical structure exist in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks to have no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of the weight matrices for a specific class of deep neural networks. Finally, two implementation methods of the sufficient conditions are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called the avoidant learning algorithm). The other is a neural network that, as an inherent property, does not have some of the critical points introduced by the hierarchical structure (called the avoidant neural network).

  8. A Tool for Fast Development of Modular and Hierarchic Neural Network-based Systems

    Directory of Open Access Journals (Sweden)

    Francisco Reinaldo

    2006-08-01

    This paper presents the PyramidNet tool as a fast and easy way to develop modular and hierarchical neural network-based systems. The tool facilitates the rapid emergence of autonomous behaviors in agents because it uses a hierarchical and modular control methodology of heterogeneous learning modules: the pyramid. Using the graphical resources of PyramidNet, the user is able to specify a behavior system even with little understanding of artificial neural networks. Experimental tests have shown that a very significant speedup is attained in the development of modular and hierarchical neural network-based systems by using this tool.

  9. Natural forest conservation hierarchical program with neural network

    Institute of Scientific and Technical Information of China (English)

    LUO Chuanwen; LI Jihong

    2006-01-01

    In this paper, the implementation steps of a natural forest protection program grading (NFPPG) with a neural network (NN) are summarized, and the concepts of program illustration, patch sign unification and regression, and the inclining factor are set forth. Employing Arc/Info GIS, the tree species diversity and rarity, disturbance degree, protection of the channel system, and classification management in the Maoershan National Forest Park were described and used as the input factors of the NN. The relationships between the NFPPG and the above factors were also analyzed. By artificially determining training samples, the NFPPG of Maoershan National Forest Park was created. Tested with all patches in the park, the generalization of the NFPPG was satisfactory. The NFPPG took both classification management and the protection of forest community types into account, as well as the ecological environment. The excitation function of the NFPPG was not seriously saturated, indicating the leading effect of the inclining factor on the network optimization.

  10. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    Science.gov (United States)

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information
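
    An illustrative sketch only (not the authors' model): build a two-level hierarchical modular adjacency matrix, dense within modules, sparser across modules, sparser still across super-modules, which is the kind of topology the study contrasts with an equally dense random network. All sizes and probabilities are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def hierarchical_modular(n_super=2, n_mod=4, mod_size=50,
                         p_within=0.2, p_between_mod=0.02, p_between_super=0.005):
    """Binary directed adjacency matrix with two hierarchical levels of modules."""
    n = n_super * n_mod * mod_size
    module = np.arange(n) // mod_size             # module index of each neuron
    supermod = module // n_mod                    # super-module index
    p = np.full((n, n), p_between_super)          # weakest coupling: across super-modules
    p[supermod[:, None] == supermod[None, :]] = p_between_mod
    p[module[:, None] == module[None, :]] = p_within
    A = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(A, 0)
    return A

A = hierarchical_modular()
print(A.sum() / (A.shape[0] * (A.shape[0] - 1)))  # overall connection density
```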

  11. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations

    Directory of Open Access Journals (Sweden)

    Sheng-Jun Wang

    2011-06-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. It was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient

  12. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations.

    Science.gov (United States)

    Wang, Sheng-Jun; Hilgetag, Claus C; Zhou, Changsong

    2011-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information

  13. Identifying time-delayed gene regulatory networks via an evolvable hierarchical recurrent neural network.

    Science.gov (United States)

    Kordmahalleh, Mina Moradi; Sefidmazgi, Mohammad Gorji; Harrison, Scott H; Homaifar, Abdollah

    2017-01-01

    The modeling of genetic interactions within a cell is crucial for a basic understanding of physiology and for applied areas such as drug design. Interactions in gene regulatory networks (GRNs) include effects of transcription factors, repressors, small metabolites, and microRNA species. In addition, the effects of regulatory interactions are not always simultaneous, but can occur after a finite time delay, or as a combined outcome of simultaneous and time delayed interactions. Powerful biotechnologies have been rapidly and successfully measuring levels of genetic expression to illuminate different states of biological systems. This has led to an ensuing challenge to improve the identification of specific regulatory mechanisms through regulatory network reconstructions. Solutions to this challenge will ultimately help to spur forward efforts based on the usage of regulatory network reconstructions in systems biology applications. We have developed a hierarchical recurrent neural network (HRNN) that identifies time-delayed gene interactions using time-course data. A customized genetic algorithm (GA) was used to optimize hierarchical connectivity of regulatory genes and a target gene. The proposed design provides a non-fully connected network with the flexibility of using recurrent connections inside the network. These features and the non-linearity of the HRNN facilitate the process of identifying temporal patterns of a GRN. Our HRNN method was implemented with the Python language. It was first evaluated on simulated data representing linear and nonlinear time-delayed gene-gene interaction models across a range of network sizes and variances of noise. We then further demonstrated the capability of our method in reconstructing GRNs of the Saccharomyces cerevisiae synthetic network for in vivo benchmarking of reverse-engineering and modeling approaches (IRMA). We compared the performance of our method to TD-ARACNE, HCC-CLINDE, TSNI and ebdbNet across different network
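
    A toy sketch (not the authors' HRNN/GA code): simulate a target gene whose expression depends on two regulators acting with different time delays, which is the kind of temporal pattern such a model is meant to recover from time-course data. Gene names, delays, and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100
g1 = rng.random(T)                       # regulator 1 expression over time (synthetic)
g2 = rng.random(T)                       # regulator 2 expression over time (synthetic)
d1, d2 = 2, 5                            # hypothetical regulatory delays (time steps)

target = np.zeros(T)
for t in range(max(d1, d2), T):
    # activation by g1 (delay d1), repression by g2 (delay d2), plus measurement noise
    target[t] = np.tanh(1.5 * g1[t - d1] - 0.8 * g2[t - d2]) + 0.05 * rng.normal()
```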

  14. A special hierarchical fuzzy neural-networks based reinforcement learning for multi-variables system

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wen-zhi; LU Tian-sheng

    2005-01-01

    This paper proposes a reinforcement learning scheme based on a special hierarchical fuzzy neural network (HFNN) for solving complicated learning tasks in a continuous multi-variable environment. The output of the previous layer in the HFNN is no longer used as the if-part of the next layer, but only in the then-part. Thus it can deal with the difficulty that arises when the output of the previous layer is meaningless or its meaning is uncertain. The proposed HFNN has a minimal number of fuzzy rules, can successfully solve the problem of rule-combination explosion, and decreases the quantity of computation and memory required. In the learning process, two HFNNs with the same structure perform fuzzy action composition and evaluation function approximation simultaneously, where the parameters of the neural networks are tuned and updated online using a gradient descent algorithm. The reinforcement learning method is shown to be correct and feasible by simulation of a double inverted pendulum system.

  15. Hierarchical Neural Networks Method for Fault Diagnosis of Large-Scale Analog Circuits

    Institute of Scientific and Technical Information of China (English)

    TAN Yanghong; HE Yigang; FANG Gefeng

    2007-01-01

    A novel hierarchical neural networks (HNNs) method for fault diagnosis of large-scale circuits is proposed. Existing techniques using neural network (NN) approaches require a large amount of computation for simulating the various possible faulty components; for large-scale circuits, the number of possible faults, and hence the simulations, grows rapidly and becomes tedious and sometimes even impractical. In the proposed method, NNs are distributed to the torn sub-blocks according to the proposed tearing principles for large-scale circuits, and the NNs are trained in batches by different patterns, in light of the presented rules for the various patterns, when the DC, AC and transient responses of the circuit are available. The method is characterized by decreasing the overlapped feasible domains of the responses of circuits with tolerance, and leads to better performance and higher correct classification. The methodology is illustrated by means of diagnosis examples.

  16. APPLICATION OF NEURAL NETWORK WITH MULTI-HIERARCHIC STRUCTURE TO EVALUATE SUSTAINABLE DEVELOPMENT OF THE COAL MINES

    Institute of Scientific and Technical Information of China (English)

    李新春; 陶学禹

    2000-01-01

    A neural network with a multi-hierarchic structure is presented in this paper to evaluate the sustainable development of coal mines, based on an analysis of its influencing factors. The whole evaluation system is composed of 5 neural networks. The feasibility of this method has been proved by a case study. This study provides a scientific and theoretical foundation for evaluating the sustainable development of coal mines.

  17. Hierarchical neural network model of the visual system determining figure/ground relation

    Science.gov (United States)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. The figural region in a 2D image, corresponding to an object in 3D space, is distinguished from the background region extending behind the object. Previously, the author proposed a neural network model of figure/ground separation built on the principle that local geometric features such as curvatures and outer angles at corners are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines the figure/ground relation within a short period (Zhou et al., 2000). In order to speed up the determination of figure/ground, this study incorporates a hierarchical architecture into the previous model. Simulations confirmed the effect of the hierarchization on computation time: as the number of layers increased, the required computation time decreased. However, this speed-up effect saturated once the number of layers increased beyond a certain point. This study attempted to explain the saturation effect using the notion of average distance between vertices from the field of complex networks, and succeeded in reproducing the saturation effect by computer simulation.

  18. Emergence of hierarchical structure mirroring linguistic composition in a recurrent neural network.

    Science.gov (United States)

    Hinoshita, Wataru; Arie, Hiroaki; Tani, Jun; Okuno, Hiroshi G; Ogata, Tetsuya

    2011-05-01

    We show that a Multiple Timescale Recurrent Neural Network (MTRNN) can acquire the capabilities to recognize, generate, and correct sentences by self-organizing in a way that mirrors the hierarchical structure of sentences: characters grouped into words, and words into sentences. The model can control which sentence to generate depending on its initial states (generation phase) and the initial states can be calculated from the target sentence (recognition phase). In an experiment, we trained our model over a set of unannotated sentences from an artificial language, represented as sequences of characters. Once trained, the model could recognize and generate grammatical sentences, even if they were not learned. Moreover, we found that our model could correct a few substitution errors in a sentence, and the correction performance was improved by adding the errors to the training sentences in each training iteration with a certain probability. An analysis of the neural activations in our model revealed that the MTRNN had self-organized, reflecting the hierarchical linguistic structure by taking advantage of the differences in timescale among its neurons: in particular, neurons that change the fastest represented "characters", those that change more slowly, "words", and those that change the slowest, "sentences".
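
    A minimal sketch of the multiple-timescale mechanism (illustrative, not the authors' MTRNN): leaky-integrator units whose time constants tau determine how quickly their states change, so fast units can track characters while slower units track words and sentences. Group sizes, time constants, and the input encoding are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
sizes = {"fast": 40, "mid": 20, "slow": 10}
tau = np.concatenate([np.full(sizes["fast"], 2.0),
                      np.full(sizes["mid"], 10.0),
                      np.full(sizes["slow"], 50.0)])
n = tau.size
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
Win = rng.normal(0, 0.1, (n, 8))         # 8-dimensional character input (illustrative)

u = np.zeros(n)                          # internal states of all units
for t in range(200):
    x = rng.normal(size=8)               # stand-in for the character sequence input
    y = np.tanh(u)
    # continuous-time RNN update: larger tau -> slower change of the unit's state
    u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + Win @ x)
```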

  19. Clustering-based classification of road traffic accidents using hierarchical clustering and artificial neural networks.

    Science.gov (United States)

    Taamneh, Madhar; Taamneh, Salah; Alkheder, Sharaf

    2017-09-01

    Artificial neural networks (ANNs) have been widely used in predicting the severity of road traffic crashes. Typically, all available information about previously occurred accidents is used to build a single prediction model (i.e., classifier). Too little attention has been paid to the differences between these accidents, leading, in most cases, to less accurate predictors. Hierarchical clustering is a well-known clustering method that seeks to group data by creating a hierarchy of clusters. Using hierarchical clustering and ANNs, a clustering-based classification approach for predicting the injury severity of road traffic accidents was proposed. About 6000 road accidents that occurred over a six-year period (2008 to 2013) in Abu Dhabi were used throughout this study. In order to reduce the amount of variation in the data, hierarchical clustering was applied to organize the data set into six different forms, each with a different number of clusters (from 1 to 6). Two ANN models were subsequently built for each cluster of accidents in each generated form. The first model was built and validated using all accidents (training set), whereas only 66% of the accidents were used to build the second model and the remaining 34% were used to test it (percentage split). Finally, the weighted average accuracy was computed for each type of model in each form of the data. The results show that when testing the models on the training set, clustering prior to classification achieves 11%-16% higher accuracy than classification without clustering, while under the percentage split it achieves 2%-5% higher accuracy. The results also suggest that partitioning the accidents into six clusters achieves the best accuracy if both types of models are taken into account.
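
    A hedged sketch of the clustering-then-classification pipeline described above, using scikit-learn on synthetic stand-in data (the feature columns, sizes, and labels are placeholders, not the Abu Dhabi data set).

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 8))                     # accident attributes (placeholder)
y = rng.integers(0, 3, size=600)                  # injury-severity label (placeholder)

n_clusters = 6
clusters = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)

models, scores = {}, []
for c in range(n_clusters):
    Xc, yc = X[clusters == c], y[clusters == c]
    Xtr, Xte, ytr, yte = train_test_split(Xc, yc, test_size=0.34, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(Xtr, ytr)
    models[c] = clf
    scores.append((len(yte), clf.score(Xte, yte)))

# weighted average accuracy across clusters, as in the percentage-split evaluation
weighted_acc = sum(n * s for n, s in scores) / sum(n for n, _ in scores)
print(weighted_acc)
```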

  20. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme of different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model allowing for modeling most hierarchical networks. We use methods based on linear programming to design the hierarchical networks. Thus, a brief introduction to the various linear programming based methods is included. The thesis is thus suitable as a foundation for the study of design of hierarchical networks. The major contribution of the thesis consists of seven papers which are included in the appendix. The papers address hierarchical network design and/or ring network...

  1. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks... The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme...

  2. Synchronization of chaotic systems and identification of nonlinear systems by using recurrent hierarchical type-2 fuzzy neural networks.

    Science.gov (United States)

    Mohammadzadeh, Ardashir; Ghaemi, Sehraneh

    2015-09-01

    This paper proposes a novel approach for training of proposed recurrent hierarchical interval type-2 fuzzy neural networks (RHT2FNN) based on the square-root cubature Kalman filters (SCKF). The SCKF algorithm is used to adjust the premise part of the type-2 FNN and the weights of defuzzification and the feedback weights. The recurrence property in the proposed network is the output feeding of each membership function to itself. The proposed RHT2FNN is employed in the sliding mode control scheme for the synchronization of chaotic systems. Unknown functions in the sliding mode control approach are estimated by RHT2FNN. Another application of the proposed RHT2FNN is the identification of dynamic nonlinear systems. The effectiveness of the proposed network and its learning algorithm is verified by several simulation examples. Furthermore, the universal approximation of RHT2FNNs is also shown.

  3. Soft sensor of chemical processes with large numbers of input parameters using auto-associative hierarchical neural network

    Institute of Scientific and Technical Information of China (English)

    Yanlin He; Yuan Xu; Zhiqiang Geng; Qunxiong Zhu

    2015-01-01

    To explore the problems of monitoring chemical processes with large numbers of input parameters, a method based on an Auto-associative Hierarchical Neural Network (AHNN) is proposed. AHNN focuses on dealing with high-dimensional datasets. An AHNN consists of two parts: groups of subnets based on well-trained Auto-associative Neural Networks (AANNs) and a main net. The subnets play an important role in the performance of the AHNN. A simple but effective method of designing the subnets is developed in this paper, in which the subnets are designed according to the classification of the data attributes. To obtain this classification, an effective method called Extension Data Attributes Classification (EDAC) is adopted. A soft sensor using AHNN based on EDAC (EDAC-AHNN) is introduced. As a case study, production data from a Purified Terephthalic Acid (PTA) solvent system are selected to examine the proposed model. The results of the EDAC-AHNN model are compared with experimental data extracted from the literature, which shows the efficiency of the proposed model.

  4. Memory Stacking in Hierarchical Networks.

    Science.gov (United States)

    Westö, Johan; May, Patrick J C; Tiitinen, Hannu

    2016-02-01

    Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
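
    An illustrative sketch of the idea, under assumptions rather than the paper's model: feedforward layers whose units keep a leaky "local memory" of their recent inputs, so that stacking layers lets the top of the hierarchy respond to input sequences longer than any single memory span. The decay constant and layer sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)

class LeakyLayer:
    def __init__(self, n_in, n_out, decay=0.7):
        self.W = rng.normal(0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        self.decay = decay               # how much of the recent past each unit retains
        self.m = np.zeros(n_out)

    def step(self, x):
        drive = self.W @ x
        self.m = self.decay * self.m + (1 - self.decay) * drive   # local memory update
        return np.tanh(self.m)

layers = [LeakyLayer(12, 24), LeakyLayer(24, 24), LeakyLayer(24, 8)]
for t in range(50):                      # a spectrotemporal input sequence (stand-in)
    h = rng.normal(size=12)
    for layer in layers:
        h = layer.step(h)                # memories stack up layer by layer
```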

  5. A CAD System for Identification and Classification of Breast Cancer Tumors in DCE-MR Images Based on Hierarchical Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Rastiboroujeni

    2015-06-01

    In this paper, we propose a computer-aided diagnosis (CAD) system based on hierarchical convolutional neural networks (HCNNs) to discriminate between malignant and benign tumors in breast DCE-MRIs. An HCNN is a hierarchical neural network that operates on two-dimensional images. It integrates feature extraction and classification into one single, fully adaptive structure; it can extract two-dimensional key features automatically and is relatively tolerant to geometric and local distortions in input images. We evaluate the learning and testing processes of the CNN implementation with gradient descent (GD) and resilient back-propagation (RPROP) approaches. We show that the proposed HCNN with the RPROP learning approach provides an effective and robust neural structure for designing a CAD system for breast MRI, and has potential as a mechanism for evaluating different types of abnormalities in medical images.

  6. Bistability of mixed states in a neural network storing hierarchical patterns

    Science.gov (United States)

    Toya, Kaname; Fukushima, Kunihiko; Kabashima, Yoshiyuki; Okada, Masato

    2000-04-01

    We discuss the properties of equilibrium states in an autoassociative memory model storing hierarchically correlated patterns (hereafter, hierarchical patterns). We show that symmetric mixed states (hereafter, mixed states) are bistable in the associative memory model storing the hierarchical patterns in a region of the ferromagnetic phase. This means that a first-order transition occurs in this ferromagnetic phase. We treat these questions with a statistical mechanical method (SCSNA) and by computer simulation. Finally, we discuss a physiological implication of this model. Sugase et al (1999 Nature 400 869) analysed the time course of the information carried by the firing of face-responsive neurons in the inferior temporal cortex. We also discuss the relation between the theoretical results and the physiological experiments of Sugase et al.

  7. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Science.gov (United States)

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning
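
    A simplified sketch of continuous-transformation (CT) style Hebbian learning on one layer (VisNet itself is a multi-layer model; this is an illustrative reduction): purely Hebbian updates with weight normalisation and simple competition, driven by overlapping shifted input patterns. All sizes and the pattern generator are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_in, n_out, alpha = 100, 20, 0.05

W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def shifted_pattern(shift, width=20, n=n_in):
    """A localized activity blob shifted across the input sheet (overlapping views)."""
    x = np.zeros(n)
    x[shift:shift + width] = 1.0
    return x

for epoch in range(20):
    for shift in range(0, n_in - 20, 2):     # small shifts -> large overlap between views
        x = shifted_pattern(shift)
        y = W @ x
        winners = y >= np.sort(y)[-3]        # keep only the few most active cells
        y = y * winners
        W += alpha * np.outer(y, x)          # Hebbian update, no memory trace needed
        W /= np.linalg.norm(W, axis=1, keepdims=True)
```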

  8. Application of hierarchical dissociated neural network in closed-loop hybrid system integrating biological and mechanical intelligence.

    Directory of Open Access Journals (Sweden)

    Yongcheng Li

    Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller: 'random' and '4Q' (cultured neurons artificially divided into four interconnected parts) neural networks. Compared to the random cultures, the '4Q' cultures presented absolutely different activities, and the robot controlled by the '4Q' network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems.

  9. Application of hierarchical dissociated neural network in closed-loop hybrid system integrating biological and mechanical intelligence.

    Science.gov (United States)

    Li, Yongcheng; Sun, Rong; Zhang, Bin; Wang, Yuechao; Li, Hongyi

    2015-01-01

    Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller including 'random' and '4Q' (cultured neurons artificially divided into four interconnected parts) neural network. Compared to the random cultures, the '4Q' cultures presented absolutely different activities, and the robot controlled by the '4Q' network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of the short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems.

  10. Hierarchical winner-take-all particle swarm optimization social network for neural model fitting.

    Science.gov (United States)

    Coventry, Brandon S; Parthasarathy, Aravindakshan; Sommer, Alexandra L; Bartlett, Edward L

    2017-02-01

    Particle swarm optimization (PSO) has gained widespread use as a general mathematical programming paradigm and seen use in a wide variety of optimization and machine learning problems. In this work, we introduce a new variant on the PSO social network and apply this method to the inverse problem of input parameter selection from recorded auditory neuron tuning curves. The topology of a PSO social network is a major contributor to optimization success. Here we propose a new social network which draws influence from winner-take-all coding found in visual cortical neurons. We show that the winner-take-all network performs exceptionally well on optimization problems with greater than 5 dimensions and runs at a lower iteration count as compared to other PSO topologies. Finally we show that this variant of PSO is able to recreate auditory frequency tuning curves and modulation transfer functions, making it a potentially useful tool for computational neuroscience models.
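
    A hedged sketch of a PSO variant in which the social term is driven only by the current "winner" (the best particle at each iteration). This is an assumption-based reading of the winner-take-all topology, not the authors' exact implementation, and the sphere objective merely stands in for a neural model's fitting error.

```python
import numpy as np

rng = np.random.default_rng(8)

def sphere(x):                            # toy objective standing in for model fitting error
    return np.sum(x ** 2, axis=-1)

n_particles, dim, iters = 30, 10, 200
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)

for _ in range(iters):
    winner = pos[np.argmin(sphere(pos))]  # winner-take-all: only the best particle exerts social influence
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (winner - pos)
    pos = pos + vel
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]

print(pbest_val.min())                    # best objective value found
```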

  11. Learning of invariant object recognition in hierarchical neural networks using temporal continuity

    OpenAIRE

    Lessmann, Markus

    2014-01-01

    Advisor: Rolf P. Würtz, Institute for Neural Computation, Ruhr-University Bochum, Germany. Date and location of PhD thesis defense: 3 November 2014, Ruhr-University Bochum, Germany. There has been a lot of progress in the field of invariant object recognition/categorization in the last decade, with several methods trying to mimic the functioning of the human visual system (e.g. Neocognitron, HMAX, VisNet). Examining those brain regions is a very difficult task with myriads of details to be consi...

  12. An algorithm for generating modular hierarchical neural network classifiers: a step toward larger scale applications

    Science.gov (United States)

    Roverso, Davide

    2003-08-01

    Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling-up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.

  13. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neuro-biology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  14. Assessing the effect of quantitative and qualitative predictors on gastric cancer individuals survival using hierarchical artificial neural network models.

    Science.gov (United States)

    Amiri, Zohreh; Mohammad, Kazem; Mahmoudi, Mahmood; Parsaeian, Mahbubeh; Zeraati, Hojjat

    2013-01-01

    There are numerous unanswered questions in the application of artificial neural network models for analysis of survival data. In most studies, independent variables have been studied as qualitative dichotomous variables, and results of using discrete and continuous quantitative, ordinal, or multinomial categorical predictive variables in these models are not well understood in comparison to conventional models. This study was designed and conducted to examine the application of these models in order to determine the survival of gastric cancer patients, in comparison to the Cox proportional hazards model. We studied the postoperative survival of 330 gastric cancer patients who underwent surgery at a surgical unit of the Iran Cancer Institute over a five-year period. Covariates of age, gender, history of substance abuse, cancer site, type of pathology, presence of metastasis, stage, and number of complementary treatments were entered into the models, and survival probabilities were calculated at 6, 12, 18, 24, 36, 48, and 60 months using the Cox proportional hazards and neural network models. We estimated coefficients of the Cox model and the weights in the neural network (with 3, 5, and 7 nodes in the hidden layer) in the training group, and used them to derive predictions in the study group. Predictions with these two methods were compared with those of the Kaplan-Meier product limit estimator as the gold standard. Comparisons were performed with the Friedman and Kruskal-Wallis tests. Survival probabilities at different times were determined using the Cox proportional hazards and a neural network with three nodes in the hidden layer; the ratios of standard errors with these two methods to the Kaplan-Meier method were 1.1593 and 1.0071, respectively. The comparisons revealed a significant difference between Cox and Kaplan-Meier (P neural network, and the neural network and the standard (Kaplan-Meier), as well as better accuracy for the neural network (with 3 nodes in the hidden layer

  15. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard;

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link-prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.

  16. A neural signature of hierarchical reinforcement learning.

    Science.gov (United States)

    Ribas-Fernandes, José J F; Solway, Alec; Diuk, Carlos; McGuire, Joseph T; Barto, Andrew G; Niv, Yael; Botvinick, Matthew M

    2011-07-28

    Human behavior displays hierarchical structure: simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those in hierarchical reinforcement learning (HRL), a machine-learning framework that extends reinforcement-learning mechanisms into hierarchical domains. To test this, we leveraged a distinctive prediction arising from HRL. In ordinary reinforcement learning, reward prediction errors are computed when there is an unanticipated change in the prospects for accomplishing overall task goals. HRL entails that prediction errors should also occur in relation to task subgoals. In three neuroimaging studies we observed neural responses consistent with such subgoal-related reward prediction errors, within structures previously implicated in reinforcement learning. The results reported support the relevance of HRL to the neural processes underlying hierarchical behavior.

  17. Onboard hierarchical network

    Science.gov (United States)

    Tunesi, Luca; Armbruster, Philippe

    2004-02-01

    The objective of this paper is to demonstrate a suitable hierarchical networking solution to improve the capabilities and performance of space systems, with significant recurrent cost savings and more efficient design & manufacturing flows. Classically, a satellite can be split into two functional sub-systems: the platform and the payload complement. The platform is in charge of providing power, attitude & orbit control and up/down-link services, whereas the payload represents the scientific and/or operational instruments/transponders and embodies the objectives of the mission. One major possibility to improve the performance of payloads, by limiting the data return to pertinent information, is to process data on board thanks to a proper implementation of the payload data system. In this way, it is possible to share non-recurring development costs by exploiting a system that can be adopted by the majority of space missions. It is believed that the Modular and Scalable Payload Data System, under development by ESA, provides a suitable solution to fulfil a large range of future mission requirements. The backbone of the system is the standardised high data rate SpaceWire network (http://www.ecss.nl/). As a complement, a lower-speed command and control bus connecting peripherals is required. For instance, at instrument level, there is a need for a "local" low-complexity bus, which gives the possibility to command and control sensors and actuators. Moreover, most of the connections at sub-system level are related to discrete signal management or simple telemetry acquisitions, which can easily and efficiently be handled by a local bus. An on-board hierarchical network can therefore be defined by interconnecting high-speed links and local buses. Additionally, it is worth stressing another important aspect of the design process: agencies, and ESA in particular, are frequently confronted with a big consortium of geographically spread companies located in different countries, each one

  18. Neural Network Applications

    NARCIS (Netherlands)

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.

    1995-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  19. The Physics of Neural Networks

    Science.gov (United States)

    Gutfreund, Hanoch; Toulouse, Gerard

    The following sections are included: * Introduction * Historical Perspective * Why Statistical Physics? * Purpose and Outline of the Paper * Basic Elements of Neural Network Models * The Biological Neuron * From the Biological to the Formal Neuron * The Formal Neuron * Network Architecture * Network Dynamics * Basic Functions of Neural Network Models * Associative Memory * Learning * Categorization * Generalization * Optimization * The Hopfield Model * Solution of the Model * The Merit of the Hopfield Model * Beyond the Standard Model * The Gardner Approach * A Microcanonical Formulation * The Case of Biased Patterns * A Canonical Formulation * Constraints on the Synaptic Weights * Learning with Errors * Learning with Noise * Hierarchically Correlated Data and Categorization * Hierarchical Data Structures * Storage of Hierarchical Data Structures * Categorization * Generalization * Learning a Classification Task * The Reference Perceptron Problem * The Contiguity Problem * Discussion - Issues of Relevance * The Notion of Attractors and Modes of Computation * The Nature of Attractors * Temporal versus Spatial Coding * Acknowledgements * References

  20. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)]

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in performing the next layer of a neural network computation, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide nonlinearity to the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
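
    A minimal example of the morphological computation described above: the usual sum of products is replaced by a maximum of sums (max-plus algebra), so the layer is nonlinear even before any thresholding. The specific weights are illustrative.

```python
import numpy as np

def morphological_layer(x, W, use_max=True):
    """y_j = max_i (x_i + w_ij), or min_i for the dual (erosion-like) form."""
    s = x[:, None] + W            # all pairwise sums x_i + w_ij
    return s.max(axis=0) if use_max else s.min(axis=0)

x = np.array([0.2, 1.5, -0.3])
W = np.array([[0.0, -1.0],
              [0.5,  0.2],
              [1.0,  0.0]])       # shape (n_inputs, n_neurons)
print(morphological_layer(x, W))                  # dilation-style output
print(morphological_layer(x, W, use_max=False))   # erosion-style output
```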

  1. Evaluation of hierarchical structured representations for QSPR studies of small molecules and polymers by recursive neural networks.

    Science.gov (United States)

    Bertinetto, Carlo; Duce, Celia; Micheli, Alessio; Solaro, Roberto; Starita, Antonina; Tiné, Maria Rosaria

    2009-04-01

    This paper reports some recent results from the empirical evaluation of different types of structured molecular representations used in QSPR analysis through a recursive neural network (RNN) model, which allows for their direct use without the need for measuring or computing molecular descriptors. This RNN methodology has been applied to the prediction of the properties of small molecules and polymers. In particular, three different descriptions of cyclic moieties, namely group, template and cyclebreak have been proposed. The effectiveness of the proposed method in dealing with different representations of chemical structures, either specifically designed or of more general use, has been demonstrated by its application to data sets encompassing various types of cyclic structures. For each class of experiments a test set with data that were not used for the development of the model was used for validation, and the comparisons have been based on the test results. The reported results highlight the flexibility of the RNN in directly treating different classes of structured input data without using input descriptors.

  2. Constructive neural network learning

    OpenAIRE

    Lin, Shaobo; Zeng, Jinshan; Zhang, Xiaoqin

    2016-01-01

    In this paper, we aim at developing scalable neural network-type learning systems. Motivated by the idea of "constructive neural networks" in approximation theory, we focus on "constructing" rather than "training" feed-forward neural networks (FNNs) for learning, and propose a novel FNNs learning system called the constructive feed-forward neural network (CFN). Theoretically, we prove that the proposed method not only overcomes the classical saturation problem for FNN approximation, but also ...

  3. Generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. Unlike other radial basis function based neural networks, such as the generalized regression neural network and the probabilistic neural network, the proposed generalized classifier neural network has five layers: input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network features gradient descent based optimization of the smoothing parameter and the addition of a diverge effect term to the calculation. The diverge effect term is an improvement on the summation layer calculation that supplies additional separation ability and flexibility. The performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance, up to 89%, is observed. The improved classification performance proves the effectiveness of the proposed neural network.
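
    A sketch of the radial-basis idea behind such classifiers (closer to a probabilistic neural network than to the exact five-layer GCNN described above): each class score is a sum of Gaussian kernels centred on that class's training patterns, controlled by a smoothing parameter. Data and sigma are placeholders.

```python
import numpy as np

def rbf_classify(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)            # squared distances
    k = np.exp(-d2 / (2 * sigma ** 2))                                        # pattern-layer activations
    scores = np.stack([k[:, y_train == c].sum(1) for c in classes], axis=1)   # summation layer
    return classes[scores.argmax(1)]                                          # output decision

rng = np.random.default_rng(9)
X = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(rbf_classify(X, y, np.array([[0.1, 0.2], [2.9, 3.1]])))                 # expected: [0 1]
```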

  4. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models obtained by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model aims to cluster data into churner and nonchurner groups and also to filter out unrepresentative data or outliers. Then, the clustered data, as the outputs, are used by the second technique to assign customers to churner and nonchurner groups. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.
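
    A rough illustration of the two-stage idea (cluster and filter, then classify) is sketched below with scikit-learn stand-ins; the synthetic data, the KMeans-based outlier filter and the 90% distance cutoff are my assumptions, and the final Cox proportional hazards stage is only indicated in a comment because it would require a survival-analysis library.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # toy churn labels

    # Stage 1: cluster into two groups and drop points far from their centroid
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keep = dist < np.quantile(dist, 0.9)               # filter likely outliers

    # Stage 2: assign customers to churner / non-churner with an ANN
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X[keep], y[keep])
    print("stage-2 accuracy on retained data:", clf.score(X[keep], y[keep]))

    # Stage 3 (omitted here): fit a Cox proportional hazards model on the
    # correctly classified records using a survival-analysis package.
    ```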

  5. The hierarchical brain network for face recognition.

    Science.gov (United States)

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  6. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designe...... is better than most secondary structure prediction methods based on single sequences even though this model contains much fewer parameters...

  7. Chaotic diagonal recurrent neural network

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. The multilayer feedforward neural network, the diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks.
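
    As an illustration of the architecture only, the sketch below implements a diagonal recurrent network, in which each hidden unit feeds back solely onto itself; the weights are random, the cubic map used to drive it is merely an example signal, and neither the chaotic extension nor the paper's learning algorithm is reproduced.

    ```python
    import numpy as np

    class DiagonalRNN:
        """Recurrent network whose hidden units feed back only onto themselves
        (a diagonal recurrent weight matrix), rather than onto every other unit."""
        def __init__(self, n_hidden=8, seed=0):
            rng = np.random.default_rng(seed)
            self.w_in = rng.normal(scale=0.5, size=n_hidden)
            self.w_rec = rng.uniform(-0.9, 0.9, size=n_hidden)  # diagonal feedback
            self.w_out = rng.normal(scale=0.5, size=n_hidden)
            self.h = np.zeros(n_hidden)

        def step(self, x):
            self.h = np.tanh(self.w_in * x + self.w_rec * self.h)
            return float(self.w_out @ self.h)

    net = DiagonalRNN()
    x = 0.3
    for _ in range(5):                  # drive the network with a cubic-type map
        x = 3.0 * x - 4.0 * x ** 3      # illustrative chaotic signal on [-1, 1]
        print(round(net.step(x), 4))
    ```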

  8. Artificial Neural Networks

    OpenAIRE

    Chung-Ming Kuan

    2006-01-01

    Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANN using familiar econometric terminology and provide an overview of ANN modeling approach and its implementation methods.

  9. Hierarchical networks of scientific journals

    CERN Document Server

    Palla, Gergely; Mones, Enys; Pollner, Péter; Vicsek, Tamás

    2015-01-01

    Scientific journals are the repositories of the gradually accumulating knowledge of mankind about the world surrounding us. Just as our knowledge is organised into classes ranging from major disciplines, subjects and fields to increasingly specific topics, journals can also be categorised into groups using various metrics. In addition to the set of topics characteristic for a journal, they can also be ranked regarding their relevance from the point of overall influence. One widespread measure is impact factor, but in the present paper we intend to reconstruct a much more detailed description by studying the hierarchical relations between the journals based on citation data. We use a measure related to the notion of m-reaching centrality and find a network which shows the level of influence of a journal from the point of the direction and efficiency with which information spreads through the network. We can also obtain an alternative network using a suitably modified nested hierarchy extraction method applied ...

  10. Neural Networks: Implementations and Applications

    NARCIS (Netherlands)

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  12. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    The hierarchical network problem is the problem of finding the least cost network, with nodes divided into groups, edges connecting nodes in each group and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks where hierarchies exist. Hierarchical...... networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing which as a sub......-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes....
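
    The following is a generic simulated-annealing skeleton for a toy two-level assignment problem, not the paper's model: the cost matrix, the two fixed hubs and the cooling schedule are all assumptions made for illustration.

    ```python
    import math
    import random

    random.seed(0)
    n, hubs = 12, [0, 6]                            # two groups with fixed hubs
    cost = [[abs(i - j) + 1 for j in range(n)] for i in range(n)]

    def total_cost(assign):
        # every non-hub node pays for the edge to the hub it is assigned to
        return sum(cost[i][assign[i]] for i in range(n) if i not in hubs)

    assign = {i: random.choice(hubs) for i in range(n) if i not in hubs}
    current = total_cost(assign)
    T = 10.0
    while T > 0.01:
        i = random.choice(list(assign))
        proposal = dict(assign)
        proposal[i] = hubs[0] if assign[i] == hubs[1] else hubs[1]  # random move
        delta = total_cost(proposal) - current
        if delta < 0 or random.random() < math.exp(-delta / T):
            assign, current = proposal, current + delta             # accept
        T *= 0.995                                                  # cool down
    print("final cost:", current, "assignment:", assign)
    ```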

  13. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  14. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  15. Hierarchical modularity in human brain functional networks

    CERN Document Server

    Meunier, D; Fornito, A; Ersche, K D; Bullmore, E T; 10.3389/neuro.11.037.2009

    2010-01-01

    The idea that complex systems have a hierarchical modular organization originates in the early 1960s and has recently attracted fresh support from quantitative studies of large scale, real-life networks. Here we investigate the hierarchical modular (or "modules-within-modules") decomposition of human brain functional networks, measured using functional magnetic resonance imaging (fMRI) in 18 healthy volunteers under no-task or resting conditions. We used a customized template to extract networks with more than 1800 regional nodes, and we applied a fast algorithm to identify nested modular structure at several hierarchical levels. We used mutual information, 0 < I < 1, to estimate the similarity of community structure of networks in different subjects, and to identify the individual network that is most representative of the group. Results show that human brain functional networks have a hierarchical modular organization with a fair degree of similarity between subjects, I=0.63. The largest 5 modules at ...

  16. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  18. Analyzing security protocols in hierarchical networks

    DEFF Research Database (Denmark)

    Zhang, Ye; Nielson, Hanne Riis

    2006-01-01

    Validating security protocols is a well-known hard problem even in a simple setting of a single global network. But a real network often consists of, besides the public-accessed part, several sub-networks and thereby forms a hierarchical structure. In this paper we first present a process calculus...... capturing the characteristics of hierarchical networks and describe the behavior of protocols on such networks. We then develop a static analysis to automate the validation. Finally we demonstrate how the technique can benefit the protocol development and the design of network systems by presenting a series...

  19. Neural networks and graph theory

    Institute of Scientific and Technical Information of China (English)

    许进; 保铮

    2002-01-01

    The relationships between artificial neural networks and graph theory are considered in detail. The applications of artificial neural networks to many difficult problems of graph theory, especially NP-complete problems, and the applications of graph theory to artificial neural networks are discussed. For example, graph theory is used to study the pattern classification problem for discrete-type feedforward neural networks and the stability analysis of feedback artificial neural networks, etc.

  20. Object recognition with hierarchical discriminant saliency networks.

    Science.gov (United States)

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and
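
    One plausible way to parametrize such a rectified linear unit, with a learnable threshold and slope that could be tuned per target class, is sketched below; the exact parametric form used in the HDSN may differ.

    ```python
    import numpy as np

    def parametric_relu(x, threshold=0.0, slope=1.0):
        """Rectifier with a tunable threshold and slope: the unit responds only
        to inputs above `threshold`, which can be adjusted per target class."""
        return slope * np.maximum(0.0, x - threshold)

    x = np.linspace(-2, 2, 9)
    print(parametric_relu(x))                          # ordinary ReLU
    print(parametric_relu(x, threshold=0.5, slope=2.0))
    ```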

  1. Neural networks in seismic discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, F.U.

    1995-01-01

    Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.

  2. Universal hierarchical behavior of citation networks

    CERN Document Server

    Mones, Enys; Vicsek, Tamás

    2014-01-01

    Many of the essential features of the evolution of scientific research are imprinted in the structure of citation networks. Connections in these networks imply information about the transfer of knowledge among papers, or in other words, edges describe the impact of papers on other publications. This inherent meaning of the edges implies that citation networks can exhibit hierarchical features, which are typical of networks based on decision-making. In this paper, we investigate the hierarchical structure of citation networks consisting of papers in the same field. We find that the majority of the networks follow a universal trend towards a highly hierarchical state, and i) the various fields display differences only concerning their phase in life (distance from the "birth" of a field) or ii) the characteristic time according to which they are approaching the stationary state. We also show by a simple argument that the alterations in the behavior are related to and can be understood by the degree of specializatio...

  3. Genetic Algorithm for Hierarchical Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sajid Hussain

    2007-09-01

    Full Text Available Large scale wireless sensor networks (WSNs) can be used for various pervasive and ubiquitous applications such as security, health-care, industry automation, agriculture, environment and habitat monitoring. As hierarchical clusters can reduce the energy consumption requirements for WSNs, we investigate intelligent techniques for cluster formation and management. A genetic algorithm (GA) is used to create energy efficient clusters for data dissemination in wireless sensor networks. The simulation results show that the proposed intelligent hierarchical clustering technique can extend the network lifetime for different network deployment environments.
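
    A toy genetic algorithm in the same spirit is sketched below: individuals are binary masks marking cluster heads, and the fitness (total distance to the nearest head plus a per-head cost) is an assumed stand-in for a real energy model, not the fitness function of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    nodes = rng.uniform(0, 100, size=(40, 2))             # sensor positions

    def fitness(mask):
        heads = nodes[mask.astype(bool)]
        if len(heads) == 0:
            return -1e9                                   # invalid: no heads
        d = np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2).min(axis=1)
        return -(d.sum() + 50.0 * len(heads))             # lower energy is better

    pop = (rng.random((30, len(nodes))) < 0.15).astype(int)
    for _ in range(100):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)][-10:]           # keep the fittest
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(10)], parents[rng.integers(10)]
            cut = rng.integers(1, len(nodes))
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            flip = rng.random(len(nodes)) < 0.02          # mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("cluster heads:", np.flatnonzero(best))
    ```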

  4. Fuzzy Multiresolution Neural Networks

    Science.gov (United States)

    Ying, Li; Qigang, Shang; Na, Lei

    A fuzzy multi-resolution neural network (FMRANN) based on a particle swarm algorithm is proposed to approximate arbitrary nonlinear functions. The activation functions of the FMRANN consist of not only wavelet functions, but also scaling functions, whose translation parameters and dilation parameters are adjustable. A set of fuzzy rules is involved in the FMRANN. Each rule corresponds either to a subset consisting of scaling functions, or to a sub-wavelet neural network consisting of wavelets with the same dilation parameters. By incorporating the time-frequency localization and multi-resolution properties of wavelets with the self-learning ability of fuzzy neural networks, the approximation ability of the FMRANN can be remarkably improved. A particle swarm algorithm is adopted to learn the translation and dilation parameters of the wavelets and to adjust the shape of the membership functions. Simulation examples are presented to validate the effectiveness of the FMRANN.

  5. Rule Extraction: Using Neural Networks or for Neural Networks?

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Zhou

    2004-01-01

    In the research of rule extraction from neural networks, fidelity describes how well the rules mimic the behavior of a neural network while accuracy describes how well the rules can be generalized. This paper identifies the fidelity-accuracy dilemma. It argues for distinguishing between rule extraction using neural networks and rule extraction for neural networks according to their different goals, where fidelity and accuracy, respectively, should be excluded from the rule quality evaluation framework.

  6. Hierarchical social networks and information flow

    Science.gov (United States)

    López, Luis; F. F. Mendes, Jose; Sanjuán, Miguel A. F.

    2002-12-01

    Using a simple model for the information flow on social networks, we show that the traditional hierarchical topologies frequently used by companies and organizations are poorly designed in terms of efficiency. Moreover, we prove that this type of structure is the result of the individual aim of monopolizing as much information as possible within the network. As information is an appropriate measure of centrality, we conclude that this kind of topology is attractive for leaders because the global influence each actor has within the network is completely determined by the hierarchical level occupied.

  7. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note gives an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  8. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  9. Strategic games on a hierarchical network model

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Among complex network models, the hierarchical network model is the one closest to such real networks as the world trade web, metabolic networks, the WWW, actor networks, and so on. It not only has a power-law degree distribution, but also grows through growth and preferential attachment, showing the scale-free property. In this paper, we study the evolution of cooperation on a hierarchical network model, adopting the prisoner's dilemma (PD) game and the snowdrift game (SG) as metaphors of the interplay between connected nodes. The BA model provides a unifying framework for the emergence of cooperation. Interestingly, however, we found that on the hierarchical model there is no sign of cooperation for the PD game, while for the SG the frequency of cooperation decreases as the common benefit decreases. By comparing the scaling of the clustering coefficient of the hierarchical network model with that of the BA model, we found that the former amplifies the effect of hubs. Considering the different performances of the PD game and the SG on complex networks, we also found that the common benefit leads to cooperation in the evolution. Thus our study may shed light on the emergence of cooperation in both natural and social environments.

  10. Ultrafast Hierarchical OTDM/WDM Network

    Directory of Open Access Journals (Sweden)

    Hideyuki Sotobayashi

    2003-12-01

    Full Text Available An ultrafast hierarchical OTDM/WDM network is proposed for the future core network. We review its enabling technologies: C- and L-wavelength-band generation, OTDM-WDM mutual multiplexing format conversions, and ultrafast OTDM wavelength-band conversions.

  11. Compressing Convolutional Neural Networks

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected laye...

  12. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity, 2) Advances in Modelling Biological and Environmental Systems, and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  13. Critical branching neural networks.

    Science.gov (United States)

    Kello, Christopher T

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.

  14. Generalized Adaptive Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  15. Noise enhances information transfer in hierarchical networks.

    Science.gov (United States)

    Czaplicka, Agnieszka; Holyst, Janusz A; Sloot, Peter M A

    2013-01-01

    We study the influence of noise on information transmission in the form of packages shipped between nodes of hierarchical networks. Numerical simulations are performed for artificial tree networks and scale-free Ravasz-Barabási networks, as well as for a real network formed by email addresses of former Enron employees. Two types of noise are considered. One is related to packet dynamics and is responsible for a random part of packet paths. The second one originates from random changes in the initial network topology. We find that the information transfer can be enhanced by the noise. The system possesses optimal performance when both kinds of noise are tuned to specific values; this corresponds to the stochastic resonance phenomenon. There is a non-trivial synergy present for both noisy components. We also found that hierarchical networks built of nodes of various degrees are more efficient in information transfer than trees with a fixed branching factor.

  16. Biased trapping issue on weighted hierarchical networks

    Indian Academy of Sciences (India)

    Meifeng Dai; Jie Liu; Feng Zhu

    2014-10-01

    In this paper, we present trapping issues of weight-dependent walks on weighted hierarchical networks which are based on the classic scale-free hierarchical networks. Assuming that the edge weight is used as local information by a random walker, we introduce a biased walk. In the biased walk, a walker, at each step, chooses one of its neighbours with a probability proportional to the weight of the connecting edge. We focus on a particular case with the immobile trap positioned at the hub node, which has the largest degree in the weighted hierarchical networks. Using a method based on generating functions, we determine explicitly the mean first-passage time (MFPT) for the trapping issue. Let the parameter a (0 < a < 1) be the weight factor. We show that the efficiency of the trapping process depends on the parameter a; the smaller the value of a, the more efficient is the trapping process.
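
    The biased step rule can be checked with a small Monte Carlo simulation; the toy weighted graph below and the choice of trap node are my assumptions, serving only to illustrate how a walker picks each neighbour with probability proportional to the edge weight and how the mean first-passage time is estimated.

    ```python
    import random

    random.seed(0)
    # toy weighted graph: adjacency[u] = list of (neighbour, edge weight)
    adjacency = {
        0: [(1, 1.0), (2, 1.0), (3, 1.0), (4, 1.0)],   # node 0 is the hub/trap
        1: [(0, 1.0), (2, 0.3)],
        2: [(0, 1.0), (1, 0.3), (3, 0.3)],
        3: [(0, 1.0), (2, 0.3), (4, 0.3)],
        4: [(0, 1.0), (3, 0.3)],
    }

    def first_passage_time(start, trap=0):
        node, steps = start, 0
        while node != trap:
            nbrs, weights = zip(*adjacency[node])
            node = random.choices(nbrs, weights=weights)[0]   # biased step
            steps += 1
        return steps

    trials = [first_passage_time(start=2) for _ in range(20000)]
    print("estimated MFPT to the hub:", sum(trials) / len(trials))
    ```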

  17. Object recognition with hierarchical discriminant saliency networks

    Directory of Open Access Journals (Sweden)

    Sunhyoung Han

    2014-09-01

    Full Text Available The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. The HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The resulting performance demonstrates benefits for all the functional enhancements of the HDSN.

  18. Quantum Neural Networks

    CERN Document Server

    Gupta, S; Gupta, Sanjay

    2002-01-01

    This paper initiates the study of quantum computing within the constraints of using a polylogarithmic ($O(\log^k n), k \geq 1$) number of qubits and a polylogarithmic number of computation steps. The current research in the literature has focused on using a polynomial number of qubits. A new mathematical model of computation called Quantum Neural Networks (QNNs) is defined, building on Deutsch's model of quantum computational network. The model introduces a nonlinear and irreversible gate, similar to the speculative operator defined by Abrams and Lloyd. The precise dynamics of this operator are defined and, while giving examples in which nonlinear Schrödinger's equations are applied, we speculate on its possible implementation. The many practical problems associated with the current model of quantum computing are alleviated in the new model. It is shown that QNNs of logarithmic size and constant depth have the same computational power as threshold circuits, which are used for modeling neural network...

  19. Interval probabilistic neural network.

    Science.gov (United States)

    Kowalski, Piotr A; Kulczycki, Piotr

    2017-01-01

    Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems increase the independence of human intervention in obtaining the analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural network approach for use in classifying interval information. The presented neural methodology is a generalization of the probabilistic neural network for interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal potential losses arising from classification errors. In this article, the topological structure of the network and the learning process are described in detail. Of note, the correctness of the procedure proposed here has been verified by way of numerical tests. These tests include examples of both synthetic data as well as benchmark instances. The results of the numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.

  20. Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kapil Nahar

    2012-12-01

    Full Text Available An artificial neural network is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example.

  1. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  3. Non-homogeneous fractal hierarchical weighted networks.

    Science.gov (United States)

    Dong, Yujuan; Dai, Meifeng; Ye, Dandan

    2015-01-01

    A model of fractal hierarchical structures that share the property of non-homogeneous weighted networks is introduced. These networks can be completely and analytically characterized in terms of the involved parameters, i.e., the size of the original graph Nk and the non-homogeneous weight scaling factors r1, r2, ..., rM. We also study the average weighted shortest path (AWSP), the average degree and the average node strength on the non-homogeneous hierarchical weighted networks. Moreover, the AWSP is calculated rigorously. We show that the AWSP depends on the number of copies and the sum of all non-homogeneous weight scaling factors in the infinite network order limit.

  4. Hierarchical community structure in complex (social) networks

    CERN Document Server

    Massaro, Emanuele

    2014-01-01

    The investigation of community structure in networks is a task of great importance in many disciplines, namely physics, sociology, biology and computer science, where systems are often represented as graphs. One of the challenges is to find local communities from a local viewpoint in a graph without global information, in order to reproduce the subjective hierarchical vision for each vertex. In this paper we present an improvement of an information dynamics algorithm in which the label propagation of nodes is based on the Markovian flow of information in the network under cognitive-inspired constraints [Massaro2012]. In this framework we introduce two more complex heuristics that allow the algorithm to detect the multi-resolution hierarchical community structure of networks from a source vertex or from communities, adopting fixed values of the model's parameters. Experimental results show that the proposed methods are efficient and well-behaved in both real-world and synthetic networks.

  5. VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK

    African Journals Online (AJOL)

    VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK: A CASE STUDY OF RUMUOLA DISTRIBUTION NETWORK. ... The artificial neural network controller is engaged in controlling the dynamic voltage ...

  6. The Hourglass Effect in Hierarchical Dependency Networks

    CERN Document Server

    Sabrin, Kaeser M

    2016-01-01

    Many hierarchically modular systems are structured in a way that resembles a bow-tie or hourglass. This "hourglass effect" means that the system generates many outputs from many inputs through a relatively small number of intermediate modules that are critical for the operation of the entire system (the waist of the hourglass). We investigate the hourglass effect in general (not necessarily layered) hierarchical dependency networks. Our analysis focuses on the number of source-to-target dependency paths that traverse each vertex, and it identifies the core of a dependency network as the smallest set of vertices that collectively cover almost all dependency paths. We then examine if a given network exhibits the hourglass property or not, comparing its core size with a "flat" (i.e., non-hierarchical) network that preserves the source dependencies of each target in the original network. As a possible explanation for the hourglass effect, we propose the Reuse Preference (RP) model that captures the bias of new mo...

  7. Synchronization patterns: from network motifs to hierarchical networks

    Science.gov (United States)

    Krishnagopal, Sanjukta; Lehnert, Judith; Poel, Winnie; Zakharova, Anna; Schöll, Eckehard

    2017-03-01

    We investigate complex synchronization patterns such as cluster synchronization and partial amplitude death in networks of coupled Stuart-Landau oscillators with fractal connectivities. The study of fractal or self-similar topology is motivated by the network of neurons in the brain. This fractal property is well represented in hierarchical networks, for which we present three different models. In addition, we introduce an analytical eigensolution method and provide a comprehensive picture of the interplay of network topology and the corresponding network dynamics, thus allowing us to predict the dynamics of arbitrarily large hierarchical networks simply by analysing small network motifs. We also show that oscillation death can be induced in these networks, even if the coupling is symmetric, contrary to previous understanding of oscillation death. Our results show that there is a direct correlation between topology and dynamics: hierarchical networks exhibit the corresponding hierarchical dynamics. This helps bridge the gap between mesoscale motifs and macroscopic networks. This article is part of the themed issue 'Horizons of cybernetical physics'.

  8. Trimaran Resistance Artificial Neural Network

    Science.gov (United States)

    2011-01-01

    11th International Conference on Fast Sea Transportation (FAST 2011), Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network, Richard... ... Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  9. [Artificial neural networks in Neurosciences].

    Science.gov (United States)

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network and its operation inspired by the nervous system and the way the inputs are coded and the process of orthogonalization of patterns.

  10. via dynamic neural networks

    Directory of Open Access Journals (Sweden)

    J. Reyes-Reyes

    2000-01-01

    Full Text Available In this paper, an adaptive technique is suggested to provide the passivity property for a class of partially known SISO nonlinear systems. A simple Dynamic Neural Network (DNN), containing only two neurons and without any hidden layers, is used to identify the unknown nonlinear system. By means of a Lyapunov-like analysis, the new learning law for this DNN, guaranteeing both successful identification and passivation effects, is derived. Based on this adaptive DNN model, an adaptive feedback controller, serving a wide class of nonlinear systems with an a priori incomplete model description, is designed. Two typical examples illustrate the effectiveness of the suggested approach.

  11. First-passage phenomena in hierarchical networks

    CERN Document Server

    Tavani, Flavia

    2016-01-01

    In this paper we study Markov processes and related first passage problems on a class of weighted, modular graphs which generalize the Dyson hierarchical model. In these networks, the coupling strength between two nodes depends on their distance and is modulated by a parameter $\sigma$. We find that, in the thermodynamic limit, ergodicity is lost and the "distant" nodes cannot be reached. Moreover, for finite-sized systems, there exists a threshold value for $\sigma$ such that, when $\sigma$ is relatively large, the inhomogeneity of the coupling pattern prevails and "distant" nodes are hardly reached. The same analysis is also carried out for generic hierarchical graphs, where interactions are meant to involve $p$-plets ($p>2$) of nodes, finding that ergodicity is still broken in the thermodynamic limit, but no threshold value for $\sigma$ is evidenced, ultimately due to a slow growth of the network diameter with the size.

  12. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  13. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  15. Perspective: network-guided pattern formation of neural dynamics

    OpenAIRE

    Hütt, Marc-Thorsten; Kaiser, Marcus; Claus C Hilgetag

    2014-01-01

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to...

  16. Neural networks in astronomy.

    Science.gov (United States)

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and therefore will be structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  17. Logic Mining Using Neural Networks

    CERN Document Server

    Sathasivam, Saratha

    2008-01-01

    Knowledge can be gained from experts, specialists in the area of interest, or it can be gained by induction from sets of data. Automatic induction of knowledge from data sets, usually stored in large databases, is called data mining. Data mining methods are important in the management of complex systems. There are many technologies available to data mining practitioners, including Artificial Neural Networks, Regression, and Decision Trees. Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural network methods are not commonly used for data mining tasks, because they often produce incomprehensible models and require long training times. One way in which the collective properties of a neural network may be used to implement a computational task is by way of the concept of energy minimization. The Hopfield network is a well-known example of such an approach. The Hopfield network is useful as a content-addressable memory or an analog computer for s...
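
    For reference, a minimal textbook Hopfield network is sketched below to illustrate the energy-minimization idea (Hebbian storage, asynchronous updates); it is not the logic-mining procedure of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n      # Hebbian storage
    np.fill_diagonal(W, 0)

    def energy(s):
        return -0.5 * s @ W @ s                        # Hopfield energy function

    state = patterns[0].copy()
    state[:2] *= -1                                    # corrupt two bits
    for _ in range(5 * n):                             # asynchronous updates
        i = rng.integers(n)
        state[i] = 1 if W[i] @ state >= 0 else -1      # each flip lowers energy
    print("recovered:", state, "energy:", energy(state))
    ```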

  18. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Probagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  19. Medical diagnosis using neural network

    CERN Document Server

    Kamruzzaman, S M; Siddiquee, Abu Bakar; Mazumder, Md Ehsanul Hoque

    2010-01-01

    This research searches for alternatives for the resolution of complex medical diagnoses, where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm, with backpropagation, offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to obtain an optimal network size. The MFNNCA was tested on several benchmark classification problems including cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural networ...
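
    A crude illustration of the incremental idea, growing the hidden layer one unit at a time and stopping once validation accuracy no longer improves, is sketched below with scikit-learn; it retrains from scratch at each size, uses an assumed patience rule, and takes sklearn's breast cancer data as a stand-in for the benchmarks mentioned, so it only mimics the spirit of a constructive algorithm, not the MFNNCA itself.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    best_acc, best_size, patience = 0.0, 1, 0
    for size in range(1, 21):                       # add one hidden unit at a time
        clf = MLPClassifier(hidden_layer_sizes=(size,), max_iter=3000,
                            random_state=0).fit(X_tr, y_tr)
        acc = clf.score(X_val, y_val)
        if acc > best_acc + 1e-3:
            best_acc, best_size, patience = acc, size, 0
        else:
            patience += 1
        if patience >= 3:                           # accuracy stopped improving
            break
    print(f"selected {best_size} hidden units, validation accuracy {best_acc:.3f}")
    ```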

  20. Brain rhythms reveal a hierarchical network organization.

    Directory of Open Access Journals (Sweden)

    G Karl Steinke

    2011-10-01

    Full Text Available Recordings of ongoing neural activity with EEG and MEG exhibit oscillations of specific frequencies over a non-oscillatory background. The oscillations appear in the power spectrum as a collection of frequency bands that are evenly spaced on a logarithmic scale, thereby preventing mutual entrainment and cross-talk. Over the last few years, experimental, computational and theoretical studies have made substantial progress on our understanding of the biophysical mechanisms underlying the generation of network oscillations and their interactions, with emphasis on the role of neuronal synchronization. In this paper we ask a very different question. Rather than investigating how brain rhythms emerge, or whether they are necessary for neural function, we focus on what they tell us about functional brain connectivity. We hypothesized that if we were able to construct abstract networks, or "virtual brains", whose dynamics were similar to EEG/MEG recordings, those networks would share structural features among themselves, and also with real brains. Applying mathematical techniques for inverse problems, we have reverse-engineered network architectures that generate characteristic dynamics of actual brains, including spindles and sharp waves, which appear in the power spectrum as frequency bands superimposed on a non-oscillatory background dominated by low frequencies. We show that all reconstructed networks display similar topological features (e.g. structural motifs) and dynamics. We have also reverse-engineered putative diseased brains (epileptic and schizophrenic), in which the oscillatory activity is altered in different ways, as reported in clinical studies. These reconstructed networks show consistent alterations of functional connectivity and dynamics. In particular, we show that the complexity of the network, quantified as proposed by Tononi, Sporns and Edelman, is a good indicator of brain fitness, since virtual brains modeling diseased states

  1. Microscopic instability in recurrent neural networks

    Science.gov (United States)

    Yamanaka, Yuzuru; Amari, Shun-ichi; Shinomoto, Shigeru

    2015-03-01

    In a manner similar to the molecular chaos that underlies the stable thermodynamics of gases, a neuronal system may exhibit microscopic instability in individual neuronal dynamics while a macroscopic order of the entire population possibly remains stable. In this study, we analyze the microscopic stability of a network of neurons whose macroscopic activity obeys stable dynamics, expressing either a monostable, bistable, or periodic state. We reveal that the network exhibits a variety of dynamical states with microscopic instability residing within a given stable macroscopic dynamics. The presence of a variety of dynamical states in such a simple random network implies more abundant microscopic fluctuations in real neural networks, which consist of more complex and hierarchically structured interactions.

  2. Artificial Neural Network Analysis System

    Science.gov (United States)

    2007-11-02

    Contract No. DASG60-00-M-0201. Purchase request no.: Foot in the Door-01. Title: Artificial Neural Network Analysis System. Company: Atlantic... Author: Powell, Bruce C. Report date: 27-02-2001. Dates covered: 28-10-2000 to 27-02-2001.

  3. HIDEN: Hierarchical decomposition of regulatory networks

    Directory of Open Access Journals (Sweden)

    Gülsoy Günhan

    2012-09-01

    Full Text Available Background: Transcription factors regulate numerous cellular processes by controlling the rate of production of each gene. The regulatory relations are modeled using transcriptional regulatory networks. Recent studies have shown that such networks have an underlying hierarchical organization. We consider the problem of discovering the underlying hierarchy in transcriptional regulatory networks. Results: We first transform this problem to a mixed integer programming problem. We then use existing tools to solve the resulting problem. For larger networks this strategy does not work due to the rapid increase in running time and space usage. We use a divide and conquer strategy for such networks. We use our method to analyze the transcriptional regulatory networks of E. coli, H. sapiens and S. cerevisiae. Conclusions: Our experiments demonstrate that: (i) our method gives statistically better results than three existing state-of-the-art methods; (ii) our method is robust against errors in the data; and (iii) our method's performance is not affected by the different topologies in the data.

  4. Hierarchical mutual information for the comparison of hierarchical community structures in complex networks

    CERN Document Server

    Perotti, Juan Ignacio; Caldarelli, Guido

    2015-01-01

    The quest for a quantitative characterization of community and modular structure of complex networks has produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and allows one to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here, the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies, and on the hierarchical ...

  5. Antiferromagnetic Ising Model in Hierarchical Networks

    Science.gov (United States)

    Cheng, Xiang; Boettcher, Stefan

    2015-03-01

    The Ising antiferromagnet is a convenient model of glassy dynamics. It can introduce geometric frustrations and may give rise to a spin glass phase and glassy relaxation at low temperatures [ 1 ] . We apply the antiferromagnetic Ising model to 3 hierarchical networks which share features of both small world networks and regular lattices. Their recursive and fixed structures make them suitable for exact renormalization group analysis as well as numerical simulations. We first explore the dynamical behaviors using simulated annealing and discover an extremely slow relaxation at low temperatures. Then we employ the Wang-Landau algorithm to investigate the energy landscape and the corresponding equilibrium behaviors for different system sizes. Besides the Monte Carlo methods, renormalization group [ 2 ] is used to study the equilibrium properties in the thermodynamic limit and to compare with the results from simulated annealing and Wang-Landau sampling. Supported through NSF Grant DMR-1207431.
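    Illustrative sketch (not from the record): the study combines simulated annealing, Wang-Landau sampling and renormalization group analysis; the minimal NumPy code below sketches only the simulated-annealing ingredient, Metropolis annealing of an antiferromagnetic Ising model on an arbitrary small graph. The ring graph, cooling schedule and parameter values are assumptions for illustration, not the hierarchical networks or settings used in the study.

        import numpy as np

        def anneal_ising_afm(adj, T_start=3.0, T_end=0.05, sweeps=2000, seed=0):
            """Metropolis simulated annealing for an antiferromagnetic Ising model,
            H = +J * sum_<ij> s_i s_j with J = 1, so aligned neighbours are penalized."""
            rng = np.random.default_rng(seed)
            n = adj.shape[0]
            s = rng.choice([-1, 1], size=n)
            for t in range(sweeps):
                T = T_start * (T_end / T_start) ** (t / (sweeps - 1))  # geometric cooling
                for _ in range(n):
                    i = rng.integers(n)
                    dE = -2.0 * s[i] * (adj[i] @ s)   # energy change of flipping spin i
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        s[i] = -s[i]
            return s, 0.5 * s @ adj @ s               # final spins and their energy

        # Example on a small ring graph standing in for a recursive network
        n = 16
        adj = np.zeros((n, n))
        for i in range(n):
            adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
        spins, energy = anneal_ising_afm(adj)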

  6. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  7. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed ... examined, and it appears that considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible, and therefore it is not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models: control concepts including parameter estimation, control concepts including inverse modelling, and control concepts including optimal control. For each of the three groups, different control concepts and specific training methods are described in detail. Further, all control concepts are tested on the same ...

  8. The holographic neural network: Performance comparison with other neural networks

    Science.gov (United States)

    Klepko, Robert

    1991-10-01

    The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed, improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  9. Neural Network Communications Signal Processing

    Science.gov (United States)

    1994-08-01

    Technical Information Report for the Neural Network Communications Signal Processing Program, CDRL A003, 31 March 1993. Software Development Plan for... track changing jamming conditions to provide the decoder with the best log-likelihood ratio metrics at a given time. As part of our development plan we... Artificial Neural Networks (ICANN-91) Volume 2, June 24-28, 1991, pp. 1677-1680. Kohonen, Teuvo, Raivio, Kimmo, Simula, Olli, Venta, Olli, Henriksson

  10. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb

  11. Time-domain analysis of neural tracking of hierarchical linguistic structures.

    Science.gov (United States)

    Zhang, Wen; Ding, Nai

    2017-02-01

    When listening to continuous speech, cortical activity measured by MEG concurrently follows the rhythms of multiple linguistic structures, e.g., syllables, phrases, and sentences. This phenomenon was previously characterized in the frequency domain. Here, we investigate the waveform of neural activity tracking linguistic structures in the time domain and quantify the coherence of neural response phases over subjects listening to the same stimulus. These analyses are achieved by decomposing the multi-channel MEG recordings into components that maximize the correlation between neural response waveforms across listeners. Each MEG component can be viewed as the recording from a virtual sensor that is spatially tuned to a cortical network showing coherent neural activity over subjects. This analysis reveals information not available from previous frequency-domain analysis of MEG global field power: First, concurrent neural tracking of hierarchical linguistic structures emerges at the beginning of the stimulus, rather than slowly building up after repetitions of the same sentential structure. Second, neural tracking of the sentential structure is reflected by slow neural fluctuations, rather than, e.g., a series of short-lasting transient responses at sentential boundaries. Lastly and most importantly, it shows that the MEG responses tracking the syllabic rhythm are spatially separable from the MEG responses tracking the sentential and phrasal rhythms.
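    Illustrative sketch (not the record's exact pipeline): one standard way to decompose multi-subject recordings into components whose waveforms are maximally correlated across listeners is correlated component analysis, solved as a generalized eigenvalue problem between pooled cross-subject and within-subject covariances. The NumPy/SciPy sketch below makes that idea concrete; the array shapes, regularization and number of components are assumptions.

        import numpy as np
        from scipy.linalg import eigh

        def correlated_components(data, n_comp=3):
            """data: array of shape (subjects, channels, time).
            Returns spatial filters whose projections are maximally
            correlated across subjects (generalized eigenvalue form)."""
            S, C, T = data.shape
            data = data - data.mean(axis=2, keepdims=True)
            # Pooled within-subject covariance
            Rw = sum(data[s] @ data[s].T for s in range(S)) / (S * T)
            # Pooled between-subject (cross) covariance
            Rb = np.zeros((C, C))
            for a in range(S):
                for b in range(S):
                    if a != b:
                        Rb += data[a] @ data[b].T
            Rb = 0.5 * (Rb + Rb.T) / (S * (S - 1) * T)   # symmetrize and normalize
            # Solve Rb w = lambda Rw w; keep the leading eigenvectors
            evals, evecs = eigh(Rb, Rw + 1e-9 * np.eye(C))
            return evecs[:, ::-1][:, :n_comp]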

  12. VLSI implementation of neural networks.

    Science.gov (United States)

    Wilamowski, B M; Binfet, J; Kaynak, M O

    2000-06-01

    Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and hard to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 μm technology. Using adequate approximation functions solved the problem of the activation function. With this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from neural network hardware implementations were superior to the results obtained with the fuzzy-system approach.
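    Illustrative sketch (an assumption-laden toy, not the chip design): the quantization effect described above can be mimicked in software by rounding trained weights to a fixed grid, standing in for the discrete channel-length/width ratios available on silicon, and comparing the layer outputs before and after rounding.

        import numpy as np

        def quantize_weights(w, step):
            """Round each weight to the nearest multiple of `step`, mimicking
            discrete transistor geometries (W/L ratios) on the chip."""
            return np.round(w / step) * step

        rng = np.random.default_rng(1)
        x = rng.normal(size=(100, 8))      # 100 hypothetical input patterns, 8 inputs
        w = rng.normal(size=(8, 4))        # stand-in for trained floating-point weights
        y_ref = np.tanh(x @ w)             # software activation as the reference
        for step in (0.5, 0.1, 0.02):
            err = np.abs(np.tanh(x @ quantize_weights(w, step)) - y_ref).max()
            print(f"quantization step {step}: max output error {err:.3f}")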

  13. MODELACIÓN DE LA ESTRUCTURA JERÁRQUICA DE MACROINVERTEBRADOS BENTÓNICOS A TRAVÉS DE REDES NEURONALES ARTIFICIALES Modeling of the Hierarchical Structure of Freshwater Macroinvertebrates Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    CLAUDIA RICO

    ordination or clustering. Currently, analytical tools of bio-inspired computation belonging to the area of artificial intelligence are available to achieve ecological models with desirable characteristics such as flexibility, accuracy, robustness and reliability. In this context, this study employed two computational methods from ecoinformatics, namely artificial neural networks (ANN), for modeling the hierarchical structure of a benthic macroinvertebrate community in terms of self-organization and prediction. The first ANN modeling method consisted of a Kohonen self-organizing map (SOM), a non-supervised learning tool that classifies the species of macroinvertebrates; the SOM receives in its input layer the abundance of each taxon from the data matrix, while the computational results are visualized in the output layer. In the output layer the species are organized into fifteen units and four hierarchical clusters. The second ANN method consisted of a multilayer feed-forward perceptron with a back-propagation algorithm to predict the richness and abundance of the three major insect orders, i.e. Ephemeroptera, Coleoptera and Trichoptera (ECT), using a set of nine physical-chemical variables. This ANN architecture included one neuron for each environmental variable, a hidden layer with seven neurons and one neuron in the output layer for ECT prediction. The results suggest that both types of ANN used, SOM and perceptron, respectively captured the hierarchical patterns and the richness and abundance predictions, and correctly supported the analysis and understanding of the dynamics of the macroinvertebrate community.
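    Illustrative sketch (not the study's code): a minimal NumPy Kohonen self-organizing map of the kind used in the first modeling step. A 5 x 3 grid is one way to obtain the fifteen output units reported above; the learning rate, neighbourhood width and epoch count are assumptions, and the back-propagation perceptron used for ECT prediction is not sketched here.

        import numpy as np

        def train_som(X, grid=(5, 3), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
            """Minimal Kohonen SOM: X has shape (samples, taxa), e.g. the
            abundance of each taxon at each sampling site."""
            rng = np.random.default_rng(seed)
            rows, cols = grid
            W = rng.normal(size=(rows * cols, X.shape[1]))
            coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
            for t in range(epochs):
                lr = lr0 * np.exp(-t / epochs)              # decaying learning rate
                sigma = sigma0 * np.exp(-t / epochs)        # shrinking neighbourhood
                for x in X[rng.permutation(len(X))]:
                    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best matching unit
                    h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
                    W += lr * h[:, None] * (x - W)          # pull neighbours toward x
            return W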

  14. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  15. Antenna analysis using neural networks

    Science.gov (United States)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern
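    Illustrative sketch (stand-in data, not the W-L training set): the mapping described above, from sampled desired patterns to array excitations, can be approximated with an off-the-shelf back-propagation regressor. Here the 41 pattern samples and 40 excitation values (20 real, 20 imaginary) are random placeholders; in the record they come from Woodward-Lawson synthesis, and the hidden-layer size is an assumption.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        patterns = rng.normal(size=(27, 41))       # 27 training patterns of 41 samples each
        excitations = rng.normal(size=(27, 40))    # 20 real + 20 imaginary excitations

        net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
        net.fit(patterns, excitations)             # back-propagation training
        predicted = net.predict(patterns[:1])      # simulated excitations for one pattern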

  16. Field experiment on a robust hierarchical metropolitan quantum cryptography network

    Institute of Scientific and Technical Information of China (English)

    XU FangXing; CHEN Wei; WANG Shuang; YIN ZhenQiang; ZHANG Yang; LIU Yun; ZHOU Zheng; ZHAO YiBo; LI HongWei; LIU Dong; HAN ZhengFu; GUO GuangCan

    2009-01-01

    these bureaus. The whole implementation including the hierarchical quantum cryptographic communication network links and the corresponding application software shows a big step toward the practical user-oriented network with a high security level.

  17. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2016-07-14

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.
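    Illustrative sketch (the integer-order baseline, not FHNN itself): the classical first-order Hopfield network that FHNN generalizes can be written in a few lines of NumPy, with Hebbian storage and asynchronous updates that descend the Lyapunov energy E = -1/2 s^T W s. The fractor circuit and fractional steepest descent of the record are not reproduced here.

        import numpy as np

        def hopfield_store(patterns):
            """Hebbian storage for a classical Hopfield network;
            patterns is a list of +/-1 vectors of equal length."""
            P = np.asarray(patterns, float)
            W = P.T @ P / P.shape[1]
            np.fill_diagonal(W, 0.0)
            return W

        def hopfield_recall(W, state, steps=200, seed=0):
            """Asynchronous updates; each flip never increases the energy."""
            s = np.array(state, float)
            rng = np.random.default_rng(seed)
            for _ in range(steps):
                i = rng.integers(len(s))
                s[i] = 1.0 if W[i] @ s >= 0 else -1.0
            return s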

  18. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  19. A Hierarchical Sensor Network Based on Voronoi Diagram

    Institute of Scientific and Technical Information of China (English)

    SHANG Rui-qiang; ZHAO Jian-li; SUN Qiu-xia; WANG Guang-xing

    2006-01-01

    A hierarchical sensor network is proposed which places the sensing and routing capacity at different layer nodes. It thus simplifies the hardware design and reduces cost. Adopting the Voronoi diagram in the partition of the backbone network, a mathematical model of data aggregation based on the hierarchical architecture is given. Simulation shows that the number of transmitted data packages is sharply cut down in the network, which reduces the bandwidth and energy requirements and makes the architecture well adapted to sensor networks.

  20. Automatic Construction of Hierarchical Road Networks

    Science.gov (United States)

    Yang, Weiping

    2016-06-01

    This paper describes an automated method of constructing a hierarchical road network given a single dataset, without the presence of thematic attributes. The method is based on a pattern graph which maintains nodes and paths as junctions and through-traffic roads. The hierarchy is formed incrementally in a top-down fashion for highways, ramps, and major roads directly connected to ramps; and bottom-up for the rest of major and minor roads. Through reasoning and analysis, ramps are identified as unique characteristics for recognizing and assembling high speed roads. The method distinguishes the types of ramps by articulating their connection patterns with highways. Major and minor roads are identified by both quantitative and qualitative analysis of spatial properties and by discovering neighbourhood patterns revealed in the data. The result of the method would enrich data description and support comprehensive queries on sorted exit or entry points on highways and their related roads. This enrichment of road network data is important for a high success rate of feature matching for road networks and for geospatial data integration.

  1. Multigradient for Neural Networks for Equalizers

    Directory of Open Access Journals (Sweden)

    Chulhee Lee

    2003-06-01

    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that the multigradient outperforms backpropagation when neural networks are used as a classifier. When neural networks are used as an equalizer in communications, they can be viewed as a classifier. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.

  2. Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems.

    Directory of Open Access Journals (Sweden)

    Martin Rosvall

    Full Text Available To comprehend the hierarchical organization of large integrated systems, we introduce the hierarchical map equation, which reveals multilevel structures in networks. In this information-theoretic approach, we exploit the duality between compression and pattern detection; by compressing a description of a random walker as a proxy for real flow on a network, we find regularities in the network that induce this system-wide flow. Finding the shortest multilevel description of the random walker therefore gives us the best hierarchical clustering of the network--the optimal number of levels and modular partition at each level--with respect to the dynamics on the network. With a novel search algorithm, we extract and illustrate the rich multilevel organization of several large social and biological networks. For example, from the global air traffic network we uncover countries and continents, and from the pattern of scientific communication we reveal more than 100 scientific fields organized in four major disciplines: life sciences, physical sciences, ecology and earth sciences, and social sciences. In general, we find shallow hierarchical structures in globally interconnected systems, such as neural networks, and rich multilevel organizations in systems with highly separated regions, such as road networks.
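    For context, the two-level map equation that the hierarchical map equation generalizes gives the per-step description length of the random walk under a partition M of the network into m modules (standard formulation; the hierarchical version recursively adds index codebooks within modules):

        L(\mathsf{M}) = q_{\curvearrowright}\, H(\mathcal{Q}) + \sum_{i=1}^{m} p_{\circlearrowright}^{\,i}\, H(\mathcal{P}^{i})

    Here q_{\curvearrowright} is the rate at which the walker switches between modules, H(\mathcal{Q}) is the entropy of the module-index codebook, p_{\circlearrowright}^{i} is the fraction of steps spent within (or exiting) module i, and H(\mathcal{P}^{i}) is the entropy of module i's codebook; minimizing L over partitions yields the best modular description of the flow.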

  3. Relations Between Wavelet Network and Feedforward Neural Network

    Institute of Scientific and Technical Information of China (English)

    刘志刚; 何正友; 钱清泉

    2002-01-01

    A comparison of construction forms and base functions is made between the feedforward neural network and the wavelet network. The relations between them are studied from the construction of wavelet functions or dilation functions in the wavelet network by different activation functions in the feedforward neural network. It is concluded that some wavelet functions are equal to a linear combination of several neurons in a feedforward neural network.

  4. Hierarchical Network Models for Education Research: Hierarchical Latent Space Models

    Science.gov (United States)

    Sweet, Tracy M.; Thomas, Andrew C.; Junker, Brian W.

    2013-01-01

    Intervention studies in school systems are sometimes aimed not at changing curriculum or classroom technique, but rather at changing the way that teachers, teaching coaches, and administrators in schools work with one another--in short, changing the professional social networks of educators. Current methods of social network analysis are…

  5. Big Data Processing in Complex Hierarchical Network Systems

    CERN Document Server

    Polishchuk, Olexandr; Tyutyunnyk, Maria; Yadzhak, Mykhailo

    2016-01-01

    This article covers the problem of processing Big Data that describe the operation of complex networks and network systems. It also introduces the notion of combining hierarchical network systems into associations and conglomerates, alongside the combination of complex networks into multiplexes. Methods for studying global network structures are analyzed depending on the purpose of the research. The main types of information flows in complex hierarchical network systems, being the basic components of associations and conglomerates, are also covered. Approaches are proposed for creating efficient computing environments, organizing distributed computations, and parallelizing information processing methods at different levels of the system hierarchy.

  6. Plant Growth Models Using Artificial Neural Networks

    Science.gov (United States)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  7. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the Rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  8. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  9. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    This article presents a novel technique to distinguish between magnetizing inrush ... Protective relaying, Probabilistic neural network, Active power relays, Power ... Forward Neural Network (MFFNN) with back-propagation learning technique.

  10. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a kind of sparse reconstruction problem. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and the item in the set-valued map of the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, some numerical experiments show the effectiveness of the proposed network.
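    Illustrative sketch (a discrete-time numerical analogue, not the circuit model of the record): the combination of a smoothing approximation of the l1 term with a projected gradient step can be written as follows, assuming a simple box constraint as the feasible set and a Huber-type smoothing with parameter mu.

        import numpy as np

        def smoothed_l1_grad(x, mu):
            """Gradient of a Huber-type smoothing of ||x||_1 with parameter mu."""
            return np.clip(x / mu, -1.0, 1.0)

        def sparse_reconstruct(A, b, lam=0.1, mu=1e-3, iters=500, box=5.0):
            """Projected gradient for min 0.5*||Ax - b||^2 + lam*||x||_1 over a box,
            with the l1 term replaced by its smooth approximation."""
            step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam / mu)   # conservative step size
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = A.T @ (A @ x - b) + lam * smoothed_l1_grad(x, mu)
                x = np.clip(x - step * g, -box, box)              # project onto the box
            return x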

  11. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  12. Meta-Learning Evolutionary Artificial Neural Networks

    OpenAIRE

    Abraham, Ajith

    2004-01-01

    In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks wherein the neural network architecture, activation function, connection weights, learning algorithm and its parameters are adapted according to the problem. We explored the performance of MLEANN and conventionally designed artificial neural networks for function approximation problems. To evaluate the compara...

  13. Building a Chaotic Proved Neural Network

    CERN Document Server

    Bahi, Jacques M; Salomon, Michel

    2011-01-01

    Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.

  14. Move Ordering using Neural Networks

    NARCIS (Netherlands)

    Kocsis, L.; Uiterwijk, J.; Van Den Herik, J.

    2001-01-01

    © Springer-Verlag Berlin Heidelberg 2001. The efficiency of alpha-beta search algorithms heavily depends on the order in which the moves are examined. This paper focuses on using neural networks to estimate the likelihood of a move being the best in a certain position. The moves considered more like

  15. Neural Network based Consumption Forecasting

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    2016-01-01

    This paper describes a Neural Network based method for consumption forecasting. This work has been financed by the ENCOURAGE project. The aim of the ENCOURAGE project is to develop embedded intelligence and integration technologies that will directly optimize energy use in buildings and enable...

  16. Spin glasses and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Parga, N. (Comision Nacional de Energia Atomica, San Carlos de Bariloche (Argentina). Centro Atomico Bariloche; Universidad Nacional de Cuyo, San Carlos de Bariloche (Argentina). Inst. Balseiro)

    1989-07-01

    The mean-field theory of spin glass models has been used as a prototype of systems with frustration and disorder. Among the most interesting related systems are models of associative memories. In these lectures we review the main concepts developed to solve the Sherrington-Kirkpatrick model and its application to neural networks. (orig.).

  17. Artificial neural networks in medicine

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  18. Competition Based Neural Networks for Assignment Problems

    Institute of Scientific and Technical Information of China (English)

    李涛; LuyuanFang

    1991-01-01

    Competition based neural networks have been used to solve the generalized assignment problem and the quadratic assignment problem. Both problems are very difficult and are ε-approximation complete. The neural network approach has yielded highly competitive performance for the generalized assignment problem and good performance for the quadratic assignment problem. These neural networks are guaranteed to produce feasible solutions.

  19. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  20. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  1. Analysis of Neural Networks through Base Functions

    NARCIS (Netherlands)

    Zwaag, van der B.J.; Slump, C.H.; Spaanenburg, L.

    2002-01-01

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  2. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia.

    Science.gov (United States)

    Kim, Junghoe; Calhoun, Vince D; Shim, Eunsoo; Lee, Jong-Hwan

    2016-01-01

    Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was
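    Illustrative sketch (PyTorch-style, with placeholder dimensions): the core idea of penalizing each hidden layer's weights with an L1 norm can be expressed as below. The record's adaptive per-layer control of the sparsity level and the stacked-autoencoder pre-training are not reproduced; the input size, the three hidden layers of 50 nodes and the fixed penalty weight are assumptions echoing the configuration reported above.

        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(5050, 50), nn.Sigmoid(),   # vectorized FC pattern -> 50 hidden nodes
            nn.Linear(50, 50), nn.Sigmoid(),
            nn.Linear(50, 50), nn.Sigmoid(),
            nn.Linear(50, 2),                    # SZ vs. HC logits
        )
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
        l1_lambda = 1e-4

        def train_step(x, y):
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            # Explicit L1-norm regularization of the layer weights (sparsity control)
            for layer in model:
                if isinstance(layer, nn.Linear):
                    loss = loss + l1_lambda * layer.weight.abs().sum()
            loss.backward()
            optimizer.step()
            return loss.item()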

  3. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well.

  4. Road Network Selection Based on Road Hierarchical Structure Control

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2015-04-01

    Full Text Available A new road network selection method based on hierarchical structure is studied. Firstly, the road network is built as strokes, which are then classified into hierarchical collections according to the criterion of betweenness centrality value (BC value). Secondly, the hierarchical structure of the strokes is enhanced using a structural characteristic identification technique. Thirdly, an importance calculation model is established according to the relationships among the hierarchical structure of the strokes. Finally, the importance values of the strokes are obtained through the model's hierarchical calculation, and the road network is selected with them. Tests are done to verify the advantage of this method by comparing it with other common stroke-oriented methods using three kinds of typical road network data. Comparison of the results shows that this method needs little semantic data and can well eliminate the negative influence of edge strokes caused by the BC-value criterion. It therefore better maintains the global hierarchical structure of the road network and is suitable for the selection of various kinds of road networks at the same time.
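    Illustrative sketch (edges stand in for strokes): the first step above, classifying roads into hierarchical levels by their betweenness centrality, can be mimicked with networkx. The record operates on strokes (concatenated road segments) rather than raw edges, and the thresholds below are arbitrary assumptions.

        import networkx as nx

        def classify_by_betweenness(G, thresholds=(0.10, 0.02)):
            """Split road-network edges into hierarchical levels by BC value."""
            bc = nx.edge_betweenness_centrality(G, normalized=True)
            hi, mid = thresholds
            levels = {"major": [], "intermediate": [], "minor": []}
            for edge, value in bc.items():
                if value >= hi:
                    levels["major"].append(edge)
                elif value >= mid:
                    levels["intermediate"].append(edge)
                else:
                    levels["minor"].append(edge)
            return levels

        # Example on a small grid graph standing in for a road network
        levels = classify_by_betweenness(nx.grid_2d_graph(6, 6))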

  5. Quantum computing in neural networks

    CERN Document Server

    Gralewicz, P

    2004-01-01

    According to the statistical interpretation of quantum theory, quantum computers form a distinguished class of probabilistic machines (PMs) by encoding n qubits in 2^n pbits. This raises the possibility of large-scale quantum computing using PMs, especially with neural networks which have the innate capability for probabilistic information processing. Restricting ourselves to a particular model, we construct and numerically examine the performance of neural circuits implementing universal quantum gates. A discussion on the physiological plausibility of the proposed coding scheme is also provided.

  6. Discontinuities in recurrent neural networks.

    Science.gov (United States)

    Gavaldá, R; Siegelmann, H T

    1999-04-01

    This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN that are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

  7. Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition

    OpenAIRE

    Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc

    2016-01-01

    This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio...

  8. Multipath Convolutional-Recursive Neural Networks for Object Recognition

    OpenAIRE

    2014-01-01

    Extracting good representations from images is essential for many computer vision tasks. While progress in deep learning shows the importance of learning hierarchical features, it is also important to learn features through multiple paths. This paper presents Multipath Convolutional-Recursive Neural Networks (M-CRNNs), a novel scheme which aims to learn image features from multiple paths using models based on a combination of convolutional and...

  9. Fuzzy logic systems are equivalent to feedforward neural networks

    Institute of Scientific and Technical Information of China (English)

    李洪兴

    2000-01-01

    Fuzzy logic systems and feedforward neural networks are equivalent in essence. First, interpolation representations of fuzzy logic systems are introduced and several important conclusions are given. Then three important kinds of neural networks are defined, i.e. linear neural networks, rectangle wave neural networks and nonlinear neural networks. Then it is proved that nonlinear neural networks can be represented by rectangle wave neural networks. Based on the results mentioned above, the equivalence between fuzzy logic systems and feedforward neural networks is proved, which will be very useful for theoretical research or applications on fuzzy logic systems or neural networks by means of combining fuzzy logic systems with neural networks.

  10. Fiber optic Adaline neural networks

    Science.gov (United States)

    Ghosh, Anjan K.; Trepka, Jim; Paparao, Palacharla

    1993-02-01

    Optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators has been discussed recently. We describe the design of a single layer fiber optic Adaline neural network which can be used as a bit pattern classifier. In our realization we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network described in this paper is designed for optical processing of guided lightwave signals, not electronic signals. We analyzed the convergence or learning characteristics of the optically implemented Adaline in the presence of errors in the hardware, and we studied methods for improving the convergence rate of the Adaline.
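    Illustrative sketch (the electronic Adaline baseline, not the fiber optic realization): the Adaline learning rule underlying the classifier is the Widrow-Hoff LMS update, shown here in NumPy for tap samples X and desired +/-1 labels d; the learning rate and epoch count are assumptions.

        import numpy as np

        def train_adaline(X, d, lr=0.01, epochs=50):
            """LMS (Widrow-Hoff) training of a single-layer Adaline."""
            w = np.zeros(X.shape[1])
            b = 0.0
            for _ in range(epochs):
                for x, target in zip(X, d):
                    y = w @ x + b            # linear (pre-threshold) output
                    err = target - y
                    w += lr * err * x        # LMS weight update
                    b += lr * err
            return w, b

        def classify(w, b, X):
            return np.sign(X @ w + b)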

  11. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  12. Neural Networks for Speech Application.

    Science.gov (United States)

    1987-11-01

    Abstract fragment (OCR-damaged; only portions are recoverable): ... neuroscience theories of how neurons process information in the brain ... Early studies by McCulloch and Pitts during the forties led to ... developed the commercially available Mark III and Mark IV neurocomputers that model neural networks and run ... References include Lashley, K., Brain Mechanisms and Intelligence (1929), and McCulloch, W. and Pitts, W., 'A Logical Calculus of the ...'

  13. Analog electronic neural network circuits

    Energy Technology Data Exchange (ETDEWEB)

    Graf, H.P.; Jackel, L.D. (AT and T Bell Labs., Holmdel, NJ (USA))

    1989-07-01

    The large interconnectivity and moderate precision required in neural network models present new opportunities for analog computing. This paper discusses analog circuits for a variety of problems such as pattern matching, optimization, and learning. Most of the circuits built so far are relatively small, exploratory designs. The most mature circuits are those for template matching. Chips performing this function are now being applied to pattern recognition problems.

  14. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system, in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  15. Process Neural Networks Theory and Applications

    CERN Document Server

    He, Xingui

    2010-01-01

    "Process Neural Networks - Theory and Applications" proposes the concept and model of a process neural network for the first time, showing how it expands the mapping relationship between the input and output of traditional neural networks, and enhancing the expression capability for practical problems, with broad applicability to solving problems relating to process in practice. Some theoretical problems such as continuity, functional approximation capability, and computing capability, are strictly proved. The application methods, network construction principles, and optimization alg

  16. Neural network subtyping of depression.

    Science.gov (United States)

    Florio, T M; Parker, G; Austin, M P; Hickie, I; Mitchell, P; Wilhelm, K

    1998-10-01

    To examine the applicability of a neural network classification strategy for assessing the independent contribution of psychomotor disturbance (PMD) and endogeneity symptoms to the DSM-III-R definition of melancholia. We studied 407 depressed patients, with the clinical dataset comprising 17 endogeneity symptoms and the 18-item CORE measure of behaviourally rated PMD. A multilayer perceptron neural network was used to fit non-linear models of varying complexity. A linear discriminant function analysis was also used to generate a model for comparison with the non-linear models. Models (linear and non-linear) using PMD items only and endogeneity symptoms only had similar rates of successful classification, while non-linear models combining both PMD and symptom scores achieved the best classifications. Our current non-linear model was superior to a linear analysis, a finding which may have wider application to psychiatric classification. Our non-linear analysis of depressive subtypes supports the binary view that melancholic and non-melancholic depression are separate clinical disorders rather than different forms of the same entity. This study illustrates how non-linear modelling with neural networks is a potentially fruitful approach to the study of the diagnostic taxonomy of psychiatric disorders and to clinical decision-making.

  17. Layer Winner-Take-All neural networks based on existing competitive structures.

    Science.gov (United States)

    Chen, C M; Yang, J F

    2000-01-01

    In this paper, we propose generalized layer winner-take-all (WTA) neural networks based on the suggested full WTA networks, which can be extended from any existing WTA structure with a simple weighted-sum neuron. With modular regularity and local connections, the layer WTA network in either hierarchical or recursive structure is suitable for a large number of competitors. The complexity and convergence performance of layer and direct WTA neural networks are analyzed. Simulation results and theoretical analyses verify that the layer WTA neural networks, with their extendibility, outperform their original direct WTA structures in terms of low complexity and fast convergence.
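    Illustrative sketch (two levels only; the record allows deeper hierarchical or recursive structures): the layer WTA idea of competing inside small local groups and then among the group winners can be captured in a few lines of NumPy.

        import numpy as np

        def layer_wta(values, group_size=8):
            """Two-level winner-take-all: local WTA within groups, then a
            final WTA over the group winners."""
            values = np.asarray(values, float)
            winners = []
            for start in range(0, len(values), group_size):
                block = values[start:start + group_size]
                winners.append(start + int(np.argmax(block)))   # local competition
            return winners[int(np.argmax(values[winners]))]     # global competition

        vals = np.random.default_rng(0).normal(size=100)
        assert layer_wta(vals) == int(np.argmax(vals))          # same winner as direct WTA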

  18. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    OM PRAKASH PATEL; ARUNA TIWARI

    2016-11-01

    In this paper, a quantum based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and a separability parameter in a quantum based manner. The quantum computing concept represents solutions probabilistically and gives a large search space for finding optimal values of the required parameters using a Gaussian random number generator. The neural network structure is formed constructively with three layers: an input layer, a hidden layer and an output layer. This constructive way of deciding the network eliminates unnecessary training of the neural network. A new parameter, the quantum separability parameter (QSP), is introduced, which finds an optimal separability plane to classify input samples. During learning, the algorithm searches for an optimal separability plane, and this parameter is taken as the threshold of the neuron for learning of the neural network. The algorithm is tested with three benchmark datasets and produces better results than existing quantum inspired and other classification approaches.

  19. Field Experiment on a Robust Hierarchical Metropolitan Quantum Cryptography Network

    CERN Document Server

    Xu, Fangxing; Wang, Shuang; Yin, Zhenqiang; Zhang, Yang; Liu, Yun; Zhou, Zheng; Zhao, Yibo; Li, Hongwei; Liu, Dong; Han, Zhengfu; Guo, Guangcan

    2009-01-01

    A hierarchical metropolitan quantum cryptography network upon the inner-city commercial telecom fiber cables is reported in this paper. The seven-user network contains a four-node backbone net with one node acting as the subnet gateway, a two-user subnet and a single-fiber access link, which is realized by the Faraday-Michelson Interferometer set-ups. The techniques of the quantum router, optical switch and trusted relay are assembled here to guarantee the feasibility and expandability of the quantum cryptography network. Five nodes of the network are located in the government departments and the secure keys generated by the quantum key distribution network are utilized to encrypt the instant video, sound, text messages and confidential files transmitting between these bureaus. The whole implementation including the hierarchical quantum cryptographic communication network links and corresponding application software shows a big step toward the practical user-oriented network with high security level.

  20. Practical neural network recipies in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assum

  1. Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm

    National Research Council Canada - National Science Library

    Nagesh Ramprasad

    2016-01-01

    .... In this research, the focus is on the Microsoft Neural Network Algorithm. The Microsoft Neural Network Algorithm is a simple implementation of the adaptable and popular neural networks that are used in machine learning...

  2. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  3. Complex Evaluation of Hierarchically-Network Systems

    CERN Document Server

    Polishchuk, Dmytro; Yadzhak, Mykhailo

    2016-01-01

    Methods of complex evaluation based on local, forecasting, aggregated, and interactive evaluation of the state, function quality, and interaction of a complex system's objects at all hierarchical levels are proposed. Examples of the analysis of structural elements of a railway transport system are used to illustrate the efficiency of the proposed approach.

  4. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is addressed to investigators in visual pattern recognition, Artificial Neural Networks and related disciplines. The document also describes the MemBrain application environment as a powerful and easy-to-use neural network editor and simulator supporting ANN.

  5. Salience-Affected Neural Networks

    CERN Document Server

    Remmelzwaal, Leendert A; Ellis, George F R

    2010-01-01

    We present a simple neural network model which combines a locally-connected feedforward structure, as is traditionally used to model inter-neuron connectivity, with a layer of undifferentiated connections which model the diffuse projections from the human limbic system to the cortex. This new layer makes it possible to model global effects such as salience, at the same time as the local network processes task-specific or local information. This simple combination network displays interactions between salience and regular processing which correspond to known effects in the developing brain, such as enhanced learning as a result of heightened affect. The cortex biases neuronal responses to affect both learning and memory, through the use of diffuse projections from the limbic system to the cortex. Standard ANNs do not model this non-local flow of information represented by the ascending systems, which are a significant feature of the structure of the brain, and although they do allow associational learning with...

  6. Dynamic Analysis of Structures Using Neural Networks

    Directory of Open Access Journals (Sweden)

    N. Ahmadi

    2008-01-01

    Full Text Available In recent years, neural networks have been considered the best candidate for fast approximation with arbitrary accuracy in time-consuming problems. Dynamic analysis of structures against earthquakes is a time-consuming process. We employed two kinds of neural networks, the Generalized Regression neural network (GR) and the Back-Propagation Wavenet neural network (BPW), for approximating the dynamic time-history response of frame structures. GR is a traditional radial basis function neural network, while BPW is categorized as a wavelet neural network. In BPW, the sigmoid activation functions of the hidden-layer neurons are substituted with wavelets, and weight training is achieved using the Scaled Conjugate Gradient (SCG) algorithm. Comparison of the results of BPW with those of GR in the dynamic analysis of an eight-story steel frame indicates that the accuracy of the properly trained BPW was better than that of GR; therefore, BPW can be efficiently used for approximate dynamic analysis of structures.
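
    As a rough sketch of the GR half of this comparison (not the paper's code), a generalized regression network is Nadaraya-Watson kernel regression over the stored training samples; the toy displacement history and the smoothing width sigma below are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code): a Generalized Regression (GR) neural
# network is Nadaraya-Watson kernel regression over the stored training set.
# Here it approximates a toy 1-D displacement time history; sigma is an assumed
# smoothing width, not a value taken from the study.
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.05):
    # Gaussian kernel weights between every query point and every training point.
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Weighted average of training targets (the GR network output layer).
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

# Toy stand-in for a structural displacement time history.
t_train = np.linspace(0.0, 10.0, 200)
y_train = np.sin(np.pi * t_train) * np.exp(-0.1 * t_train)

t_query = np.linspace(0.0, 10.0, 1000)
y_hat = grnn_predict(t_train, y_train, t_query)
print("peak absolute approximated response:", np.abs(y_hat).max())
```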

  7. Neural mechanisms underlying the computation of hierarchical tree structures in mathematics.

    Directory of Open Access Journals (Sweden)

    Tomoya Nakai

    Full Text Available Whether mathematical and linguistic processes share the same neural mechanisms has been a matter of controversy. By examining various sentence structures, we recently demonstrated that activations in the left inferior frontal gyrus (L. IFG) and left supramarginal gyrus (L. SMG) were modulated by the Degree of Merger (DoM), a measure for the complexity of tree structures. In the present study, we hypothesize that the DoM is also critical in mathematical calculations, and clarify whether the DoM in the hierarchical tree structures modulates activations in these regions. We tested an arithmetic task that involved linear and quadratic sequences with recursive computation. Using functional magnetic resonance imaging, we found significant activation in the L. IFG, L. SMG, bilateral intraparietal sulcus (IPS), and precuneus selectively among the tested conditions. We also confirmed that activations in the L. IFG and L. SMG were free from memory-related factors, and that activations in the bilateral IPS and precuneus were independent from other possible factors. Moreover, by fitting parametric models of eight factors, we found that the model of DoM in the hierarchical tree structures was the best to explain the modulation of activations in these five regions. Using dynamic causal modeling, we showed that the model with a modulatory effect for the connection from the L. IPS to the L. IFG, and with driving inputs into the L. IFG, was highly probable. The intrinsic, i.e., task-independent, connection from the L. IFG to the L. IPS, as well as that from the L. IPS to the R. IPS, would provide a feedforward signal, together with negative feedback connections. We indicate that mathematics and language share the network of the L. IFG and L. IPS/SMG for the computation of hierarchical tree structures, and that mathematics recruits the additional network of the L. IPS and R. IPS.

  8. Neural mechanisms underlying the computation of hierarchical tree structures in mathematics.

    Science.gov (United States)

    Nakai, Tomoya; Sakai, Kuniyoshi L

    2014-01-01

    Whether mathematical and linguistic processes share the same neural mechanisms has been a matter of controversy. By examining various sentence structures, we recently demonstrated that activations in the left inferior frontal gyrus (L. IFG) and left supramarginal gyrus (L. SMG) were modulated by the Degree of Merger (DoM), a measure for the complexity of tree structures. In the present study, we hypothesize that the DoM is also critical in mathematical calculations, and clarify whether the DoM in the hierarchical tree structures modulates activations in these regions. We tested an arithmetic task that involved linear and quadratic sequences with recursive computation. Using functional magnetic resonance imaging, we found significant activation in the L. IFG, L. SMG, bilateral intraparietal sulcus (IPS), and precuneus selectively among the tested conditions. We also confirmed that activations in the L. IFG and L. SMG were free from memory-related factors, and that activations in the bilateral IPS and precuneus were independent from other possible factors. Moreover, by fitting parametric models of eight factors, we found that the model of DoM in the hierarchical tree structures was the best to explain the modulation of activations in these five regions. Using dynamic causal modeling, we showed that the model with a modulatory effect for the connection from the L. IPS to the L. IFG, and with driving inputs into the L. IFG, was highly probable. The intrinsic, i.e., task-independent, connection from the L. IFG to the L. IPS, as well as that from the L. IPS to the R. IPS, would provide a feedforward signal, together with negative feedback connections. We indicate that mathematics and language share the network of the L. IFG and L. IPS/SMG for the computation of hierarchical tree structures, and that mathematics recruits the additional network of the L. IPS and R. IPS.

  9. Neural Mechanisms Underlying the Computation of Hierarchical Tree Structures in Mathematics

    Science.gov (United States)

    Nakai, Tomoya; Sakai, Kuniyoshi L.

    2014-01-01

    Whether mathematical and linguistic processes share the same neural mechanisms has been a matter of controversy. By examining various sentence structures, we recently demonstrated that activations in the left inferior frontal gyrus (L. IFG) and left supramarginal gyrus (L. SMG) were modulated by the Degree of Merger (DoM), a measure for the complexity of tree structures. In the present study, we hypothesize that the DoM is also critical in mathematical calculations, and clarify whether the DoM in the hierarchical tree structures modulates activations in these regions. We tested an arithmetic task that involved linear and quadratic sequences with recursive computation. Using functional magnetic resonance imaging, we found significant activation in the L. IFG, L. SMG, bilateral intraparietal sulcus (IPS), and precuneus selectively among the tested conditions. We also confirmed that activations in the L. IFG and L. SMG were free from memory-related factors, and that activations in the bilateral IPS and precuneus were independent from other possible factors. Moreover, by fitting parametric models of eight factors, we found that the model of DoM in the hierarchical tree structures was the best to explain the modulation of activations in these five regions. Using dynamic causal modeling, we showed that the model with a modulatory effect for the connection from the L. IPS to the L. IFG, and with driving inputs into the L. IFG, was highly probable. The intrinsic, i.e., task-independent, connection from the L. IFG to the L. IPS, as well as that from the L. IPS to the R. IPS, would provide a feedforward signal, together with negative feedback connections. We indicate that mathematics and language share the network of the L. IFG and L. IPS/SMG for the computation of hierarchical tree structures, and that mathematics recruits the additional network of the L. IPS and R. IPS. PMID:25379713

  10. Fast Algorithms for Convolutional Neural Networks

    OpenAIRE

    Lavin, Andrew; Gray, Scott

    2015-01-01

    Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self-driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT-based convolution is fast for large filters, but state-of-the-art convolutional neural networks use small, 3x3 filters. We ...

  11. Modelling Microwave Devices Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Andrius Katkevičius

    2012-04-01

    Full Text Available Artificial neural networks (ANN) have recently gained attention as fast and flexible tools for modelling and designing microwave devices. The paper reviews the opportunities to use them for analysis and synthesis tasks. The article focuses on which tasks might be solved using neural networks, what challenges might arise when using artificial neural networks for carrying out tasks on microwave devices, and discusses problem-solving techniques for microwave devices with intermittent characteristics. Article in Lithuanian

  12. Rule Extraction using Artificial Neural Networks

    OpenAIRE

    2010-01-01

    Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can...

  13. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  14. Forecasting Exchange Rate Using Neural Networks

    OpenAIRE

    Raksaseree, Sukhita

    2009-01-01

    Artificial neural network models have become increasingly popular among researchers and investors, since many studies have shown that they have superior performance over traditional statistical models. This paper aims to investigate neural network performance in forecasting foreign exchange rates based on the backpropagation algorithm. Forecasts of the Thai Baht against seven currencies are conducted to observe the performance of the neural network models using the performance criteria for both ...

  15. Semantic Interpretation of An Artificial Neural Network

    Science.gov (United States)

    1995-12-01

    ARTIFICIAL NEURAL NETWORK .7,’ THESIS Stanley Dale Kinderknecht Captain, USAF 770 DEAT7ET77,’H IR O C 7... ARTIFICIAL NEURAL NETWORK THESIS Stanley Dale Kinderknecht Captain, USAF AFIT/GCS/ENG/95D-07 Approved for public release; distribution unlimited The views...Government. AFIT/GCS/ENG/95D-07 SEMANTIC INTERPRETATION OF AN ARTIFICIAL NEURAL NETWORK THESIS Presented to the Faculty of the School of Engineering of

  16. Feature Weight Tuning for Recursive Neural Networks

    OpenAIRE

    2014-01-01

    This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform "weight tuning" for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful c...

  17. Modelling hierarchical and modular complex networks: division and independence

    Science.gov (United States)

    Kim, D.-H.; Rodgers, G. J.; Kahng, B.; Kim, D.

    2005-06-01

    We introduce a growing network model which generates both modular and hierarchical structure in a self-organized way. To this end, we modify the Barabási-Albert model into one evolving under the principles of division and independence as well as growth and preferential attachment (PA). A newly added vertex chooses one of the modules composed of existing vertices, and attaches edges to vertices belonging to that module following the PA rule. When the module reaches a certain size, it is divided into two, and a new module is created. The karate club network studied by Zachary is a simple version of the current model. We find that the model can reproduce both modular and hierarchical properties, characterized by the hierarchical clustering function C(k) of a vertex with degree k being in good agreement with empirical measurements for real-world networks.
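
    The growth rule described above can be prototyped in a few lines. The sketch below is an illustration under assumed settings (m edges per new vertex, uniform module choice, random halving once a module reaches M members), not the authors' exact model, and it ends by tabulating the hierarchical clustering function C(k).

```python
# A minimal sketch under assumed settings (m edges per new vertex, uniform module
# choice, random halving once a module reaches M members); it illustrates
# "growth + preferential attachment within modules + division", not the authors'
# exact rules, and ends by tabulating the clustering function C(k).
import random
from collections import defaultdict
import networkx as nx

def grow_modular_network(n_final=2000, m=2, M=50, seed=0):
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)                  # small seed module
    modules = [set(G.nodes())]
    for new in range(m + 1, n_final):
        module = rng.choice(modules)              # pick a module for the newcomer
        members = list(module)
        weights = [G.degree(v) + 1 for v in members]
        targets = set()
        while len(targets) < min(m, len(members)):   # PA restricted to that module
            targets.add(rng.choices(members, weights=weights, k=1)[0])
        G.add_node(new)
        G.add_edges_from((new, v) for v in targets)
        module.add(new)
        if len(module) >= M:                      # division: split into two modules
            shuffled = list(module)
            rng.shuffle(shuffled)
            modules.remove(module)
            half = len(shuffled) // 2
            modules.append(set(shuffled[:half]))
            modules.append(set(shuffled[half:]))
    return G

G = grow_modular_network()
clust = nx.clustering(G)
c_of_k = defaultdict(list)
for v, k in G.degree():
    c_of_k[k].append(clust[v])
for k in sorted(c_of_k)[:10]:
    print(k, sum(c_of_k[k]) / len(c_of_k[k]))
```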

  18. Hierarchical Overlapping Clustering of Network Data Using Cut Metrics

    CERN Document Server

    Gama, Fernando; Ribeiro, Alejandro

    2016-01-01

    A novel method to obtain hierarchical and overlapping clusters from network data (i.e., a set of nodes endowed with pairwise dissimilarities) is presented. The introduced method is hierarchical in the sense that it outputs a nested collection of groupings of the node set depending on the resolution or degree of similarity desired, and it is overlapping since it allows nodes to belong to more than one group. Our construction is rooted in the facts that a hierarchical (non-overlapping) clustering of a network can be equivalently represented by a finite ultrametric space and that a convex combination of ultrametrics results in a cut metric. By applying a hierarchical (non-overlapping) clustering method to multiple dithered versions of a given network and then convexly combining the resulting ultrametrics, we obtain a cut metric associated to the network of interest. We then show how to extract a hierarchical overlapping clustering structure from the aforementioned cut metric. Furthermore, the so-called overlappi...

  19. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  20. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  1. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others]

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
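
    For the linear-response part of this approach, identifying the composition of an unknown spectrum that is a linear superposition of known spectra reduces to a least-squares fit over all channels. The sketch below illustrates that idea with a hypothetical random spectral library; it is not the PNNL system.

```python
# A minimal sketch (not the PNNL system): when an unknown gamma-ray spectrum is a
# linear superposition of known isotope spectra, an OLAM-style linear network
# solution reduces to least squares over every channel of the spectrum.
# The spectral "library" here is a hypothetical random stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_isotopes = 512, 4
library = np.abs(rng.normal(size=(n_channels, n_isotopes)))   # known spectra (columns)
true_mix = np.array([0.5, 0.0, 1.2, 0.3])
unknown = library @ true_mix + 0.01 * rng.normal(size=n_channels)  # noisy measurement

# Least-squares estimate of the mixture coefficients, using the whole spectrum
# rather than individual photo-peaks, as the abstract emphasizes.
coeffs, *_ = np.linalg.lstsq(library, unknown, rcond=None)
print(np.round(coeffs, 3))
```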

  2. Neural Networks for Rapid Design and Analysis

    Science.gov (United States)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays, as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.

  3. Systolic implementation of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    De Groot, A.J.; Parker, S.R.

    1989-01-01

    The backpropagation algorithm for error gradient calculations in multilayer, feed-forward neural networks is derived in matrix form involving inner and outer products. It is demonstrated that these calculations can be carried out efficiently using systolic processing techniques, particularly using the SPRINT, a 64-element systolic processor developed at Lawrence Livermore National Laboratory. This machine contains one million synapses, and forward-propagates 12 million connections per second, using 100 watts of power. When executing the algorithm, each SPRINT processor performs useful work 97% of the time. The theory and applications are confirmed by some nontrivial examples involving seismic signal recognition. 4 refs., 7 figs.
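
    The matrix form referred to above can be written down compactly: for one layer, the forward pass is a set of inner products and the weight gradient is an outer product of the back-propagated error with the layer input. The NumPy sketch below only illustrates that structure; it is not the SPRINT implementation.

```python
# A plain-NumPy sketch of the point above, not the SPRINT code: for one layer,
# the forward pass is a matrix-vector product (inner products) and the weight
# gradient is an outer product of the back-propagated error with the layer input.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)              # layer input
W = rng.normal(size=(3, 4))         # weights
b = np.zeros(3)
t = rng.normal(size=3)              # target for a squared-error loss

z = W @ x + b                       # forward pass: inner products
y = np.tanh(z)
delta = (y - t) * (1.0 - y ** 2)    # error propagated back through tanh

grad_W = np.outer(delta, x)         # outer product = full weight-gradient matrix
grad_b = delta
print(grad_W.shape)                 # (3, 4)
```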

  4. Magnitude Sensitive Competitive Neural Networks

    OpenAIRE

    Pelayo Campillos, Enrique; Buldain Pérez, David; Orrite Uruñuela, Carlos

    2014-01-01

    This thesis presents a set of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are a family of competitive learning algorithms that include a magnitude term as a modulating factor of the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that they are represented with high de...

  5. Detect overlapping and hierarchical community structure in networks

    CERN Document Server

    Shen, Huawei; Cai, Kai; Hu, Mao-Bin

    2008-01-01

    Clustering and community structure are crucial for many network systems and the related dynamic processes. It has been shown that communities are usually overlapping and hierarchical. However, previous methods investigate these two properties of community structure separately. This paper proposes an algorithm (EAGLE) to detect both the overlapping and hierarchical properties of complex community structure together. The algorithm deals with the set of maximal cliques and adopts an agglomerative framework. The quality function of modularity is extended to evaluate the goodness of a cover. Examples of application to real-world networks give excellent results.

  6. Hierarchical Routing over Dynamic Wireless Networks

    CERN Document Server

    Tschopp, Dominique; Grossglauser, Matthias

    2009-01-01

    Wireless network topologies change over time and maintaining routes requires frequent updates. Updates are costly in terms of consuming throughput available for data transmission, which is precious in wireless networks. In this paper, we ask whether there exist low-overhead schemes that produce low-stretch routes. This is studied by using the underlying geometric properties of the connectivity graph in wireless networks.

  7. Do Convolutional Neural Networks Learn Class Hierarchy?

    Science.gov (United States)

    Alsallakh, Bilal; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2017-08-29

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.
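
    One simple way to expose such a class hierarchy from a trained classifier (a sketch, not the authors' visual-analytics tool) is to convert the confusion matrix into a class-to-class distance and cluster it hierarchically; the 4-class confusion matrix below is hypothetical.

```python
# A sketch, not the authors' visual-analytics tool: derive a class hierarchy from
# a CNN's confusion matrix by hierarchical clustering of a confusion-based
# distance. The 4-class confusion matrix below is hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def class_hierarchy(confusion):
    c = confusion.astype(float)
    c = c / c.sum(axis=1, keepdims=True)      # row-normalize to confusion rates
    sim = 0.5 * (c + c.T)                     # symmetric similarity
    dist = 1.0 - sim
    np.fill_diagonal(dist, 0.0)
    return linkage(squareform(dist, checks=False), method="average")

conf = np.array([[80, 15,  3,  2],            # classes 0/1 confuse each other,
                 [14, 81,  2,  3],            # as do classes 2/3
                 [ 2,  3, 78, 17],
                 [ 3,  2, 16, 79]])
Z = class_hierarchy(conf)
print(dendrogram(Z, no_plot=True)["ivl"])     # leaf order reflects the hierarchy
```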

  8. The Laplacian spectrum of neural networks.

    Science.gov (United States)

    de Lange, Siemon C; de Reus, Marcel A; van den Heuvel, Martijn P

    2014-01-13

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these "conventional" graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.
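
    The quantity examined here, the eigenvalue spectrum of the normalized Laplacian, is straightforward to compute for any graph. The sketch below does so for a toy small-world graph standing in for the connectome data used in the study.

```python
# A minimal sketch: the normalized-Laplacian eigenvalue spectrum of a graph,
# computed here for a toy small-world graph standing in for the macaque, cat and
# C. elegans connectomes examined in the study.
import numpy as np
import networkx as nx

G = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=0)
L = nx.normalized_laplacian_matrix(G).toarray()
eigvals = np.sort(np.linalg.eigvalsh(L))      # spectrum lies in [0, 2]
print(eigvals[:5])
print(eigvals[-5:])
```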

  9. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  10. Tiresias: Online Anomaly Detection for Hierarchical Operational Network Data

    CERN Document Server

    Hong, Chi-Yao; Duffield, Nick; Wang, Jia

    2012-01-01

    Operational network data, management data such as customer care call logs and equipment system logs, is a very important source of information for network operators to detect problems in their networks. Unfortunately, there is a lack of efficient tools to automatically track and detect anomalous events on operational data, causing ISP operators to rely on manual inspection of this data. While anomaly detection has been widely studied in the context of network data, operational data presents several new challenges, including the volatility and sparseness of data, and the need to perform fast detection (complicating application of schemes that require offline processing or large/stable data sets to converge). To address these challenges, we propose Tiresias, an automated approach to locating anomalous events on hierarchical operational data. Tiresias leverages the hierarchical structure of operational data to identify high-impact aggregates (e.g., locations in the network, failure modes) likely to be associated w...

  11. Neural Network Controlled Visual Saccades

    Science.gov (United States)

    Johnson, Jeffrey D.; Grogan, Timothy A.

    1989-03-01

    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment) the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  12. Neural-network front ends in unsupervised learning.

    Science.gov (United States)

    Pedrycz, W; Waletzky, J

    1997-01-01

    Proposed is an idea of partial supervision realized in the form of a neural-network front end to the schemes of unsupervised learning (clustering). This neural network leads to an anisotropic nature of the induced feature space. The anisotropic property of the space provides us with some of its local deformation necessary to properly represent labeled data and enhance efficiency of the mechanisms of clustering to be exploited afterwards. The training of the network is completed based upon available labeled patterns; a referential form of the labeling gives rise to reinforcement learning. It is shown that the discussed approach is universal and can be utilized in conjunction with any clustering method. Experimental studies are concentrated on three main categories of unsupervised learning including FUZZY ISODATA, Kohonen self-organizing maps, and hierarchical clustering.

  13. Video Traffic Prediction Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Miloš Oravec

    2008-10-01

    Full Text Available In this paper, we consider video stream prediction for application in services like video-on-demand, videoconferencing, video broadcasting, etc. The aim is to predict the video stream for an efficient bandwidth allocation of the video signal. Efficient prediction of traffic generated by multimedia sources is an important part of traffic and congestion control procedures at the network edges. As a tool for the prediction, we use neural networks – multilayer perceptron (MLP), radial basis function (RBF) networks and backpropagation through time (BPTT) neural networks. At first, we briefly introduce the theoretical background of neural networks, the prediction methods and the differences between them. We also propose video time-series processing using moving averages. Simulation results for each type of neural network together with final comparisons are presented. For comparison purposes, conventional (non-neural) prediction is also included. The purpose of our work is to construct suitable neural networks for variable bit rate video prediction and evaluate them. We use video traces from [1].
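
    A minimal version of this pipeline (assumed window length, smoothing span and network size, not the paper's settings) is to smooth the bit-rate trace with a moving average, build sliding-window input vectors, and train an MLP for one-step-ahead prediction:

```python
# A minimal sketch with assumed settings (window length, smoothing span, network
# size), not the paper's configuration: moving-average preprocessing of a
# bit-rate trace followed by a one-step-ahead MLP predictor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
trace = 5.0 + np.sin(np.arange(2000) / 20.0) + 0.3 * rng.normal(size=2000)  # stand-in VBR trace

def moving_average(x, span=5):
    return np.convolve(x, np.ones(span) / span, mode="valid")

smoothed = moving_average(trace)
window = 10                                    # number of past samples used as inputs
X = np.stack([smoothed[i:i + window] for i in range(len(smoothed) - window)])
y = smoothed[window:]

split = int(0.8 * len(X))
mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])
print("test MSE:", np.mean((mlp.predict(X[split:]) - y[split:]) ** 2))
```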

  14. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities are discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also explores questions related to the biological underpinnings of models of neural networks; considers neural network modeling using differential equations with impulsive and piecewise constant argument discontinuities; and provides all the necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics who have an interest in app...

  15. Neural Networks for Emotion Classification

    CERN Document Server

    Sun, Yafei

    2011-01-01

    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform backpropagation: instead of using the partial derivatives of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...

  16. Artificial neural networks in neurosurgery.

    Science.gov (United States)

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of the key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANNs in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in the biomechanical assessment of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery.

  17. Optimizing neural network forecast by immune algorithm

    Institute of Scientific and Technical Information of China (English)

    YANG Shu-xia; LI Xiang; LI Ning; YANG Shang-dong

    2006-01-01

    Considering multi-factor influence, a forecasting model was built. The structure of a BP neural network was designed, and an immune algorithm was applied to optimize its network structure and weights. After training on power demand data for China from 1980 to 2005, a nonlinear network model was obtained for the relationship between power demand and the factors influencing it, thereby verifying the proposed method. Meanwhile, the results were compared to those of a neural network optimized by a genetic algorithm. The results show that this method is superior to a neural network optimized by a genetic algorithm and is an effective approach to time-series forecasting.

  18. Optimising the topology of complex neural networks

    CERN Document Server

    Jiang, Fei; Schoenauer, Marc

    2007-01-01

    In this paper, we study instances of complex neural networks, i.e. neural networks with complex topologies. We use Self-Organizing Map neural networks whose neighbourhood relationships are defined by a complex network, to classify handwritten digits. We show that topology has a small impact on performance and robustness to neuron failures, at least at long learning times. Performance may however be increased (by almost 10%) by artificial evolution of the network topology. In our experimental conditions, the evolved networks are more random than their parents, but display a more heterogeneous degree distribution.

  19. A new formulation for feedforward neural networks.

    Science.gov (United States)

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods are employed in this paper, one involving a derivative-based optimization algorithm (a variation of backpropagation) and the other a derivative-free optimization algorithm. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  20. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
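
    A toy version of this task (not the ETANN set-up, and assuming signed drift distances so there is no left-right ambiguity) maps four per-layer drift measurements to the track intercept and slope with a small regressor and compares it against a conventional least-squares line fit:

```python
# A toy sketch in the spirit of this record, not the ETANN set-up. Assumption:
# signed drift distances (no left-right ambiguity). A small regressor maps four
# per-layer measurements to (intercept, slope) and is compared with a
# conventional least-squares straight-line fit.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
z_layers = np.array([0.0, 1.0, 2.0, 3.0])           # layer positions along the track
n = 5000
slope = rng.uniform(-1.0, 1.0, n)
intercept = rng.uniform(-2.0, 2.0, n)
drift = intercept[:, None] + slope[:, None] * z_layers[None, :]   # signed distances
drift += 0.02 * rng.normal(size=drift.shape)         # measurement smearing
targets = np.stack([intercept, slope], axis=1)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(drift[:4000], targets[:4000])

A = np.stack([np.ones_like(z_layers), z_layers], axis=1)          # design matrix
ls_fit = drift[4000:] @ np.linalg.pinv(A).T                       # per-track LS fit
print("network RMSE:", np.sqrt(np.mean((net.predict(drift[4000:]) - targets[4000:]) ** 2)))
print("LS fit  RMSE:", np.sqrt(np.mean((ls_fit - targets[4000:]) ** 2)))
```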

  1. Hierarchicality of trade flow networks reveals complexity of products.

    Science.gov (United States)

    Shi, Peiteng; Zhang, Jiang; Yang, Bo; Luo, Jingfei

    2014-01-01

    With globalization, countries are more connected than before by trading flows, which amount to at least 36 trillion dollars today. Interestingly, around 30-60 percent of global exports consist of intermediate products. Therefore, the trade flow networks of particular products with high added value can be regarded as value chains. The question is whether we can discriminate between these products from their distinctive flow network structures. This paper applies the flow analysis method developed in ecology to 638 trading flow networks of different products. We claim that the allometric scaling exponent η can be used to characterize the degree of hierarchicality of a flow network, i.e., whether the trading products flow along long hierarchical chains. It is then pointed out that the flow networks of products with higher added value and complexity, like machinery and transport equipment, have larger exponents, meaning that their trade flow networks are more hierarchical. As a result, without extra data such as a global input-output table, we can identify the product categories with higher complexity, and the relative importance of a country in the global value chain, from the trading network alone.

  2. Hierarchicality of trade flow networks reveals complexity of products.

    Directory of Open Access Journals (Sweden)

    Peiteng Shi

    Full Text Available With globalization, countries are more connected than before by trading flows, which amount to at least 36 trillion dollars today. Interestingly, around 30-60 percent of global exports consist of intermediate products. Therefore, the trade flow networks of particular products with high added value can be regarded as value chains. The question is whether we can discriminate between these products from their distinctive flow network structures. This paper applies the flow analysis method developed in ecology to 638 trading flow networks of different products. We claim that the allometric scaling exponent η can be used to characterize the degree of hierarchicality of a flow network, i.e., whether the trading products flow along long hierarchical chains. It is then pointed out that the flow networks of products with higher added value and complexity, like machinery and transport equipment, have larger exponents, meaning that their trade flow networks are more hierarchical. As a result, without extra data such as a global input-output table, we can identify the product categories with higher complexity, and the relative importance of a country in the global value chain, from the trading network alone.

  3. Coherence resonance in bursting neural networks.

    Science.gov (United States)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal, a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  4. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of those models need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, the reconstruction error and the sparsity of the hidden units, simultaneously, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and that the proposed multiobjective model can learn useful sparse features.
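
    The two objectives being traded off are easy to state concretely. The sketch below (an illustration only, not the paper's evolutionary algorithm) evaluates reconstruction error and hidden-unit sparsity for candidate tied-weight autoencoders and shows the Pareto-dominance test a multiobjective optimizer would use to compare them; the data, weight scales and hidden size are assumptions.

```python
# An illustration only, not the paper's evolutionary algorithm: evaluate the two
# objectives (reconstruction error, hidden-unit sparsity) for candidate
# tied-weight autoencoders, and the Pareto-dominance test used to compare them.
# The data, hidden size and weight scales are assumptions.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 30))                 # toy 30-dimensional data

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objectives(W):
    """W: (n_hidden, n_visible) tied weights of an autoencoder; both objectives are minimized."""
    H = sigmoid(X @ W.T)                       # hidden activations
    recon_error = np.mean((X - H @ W) ** 2)    # tied-weight reconstruction error
    sparsity = np.mean(H)                      # average hidden activation
    return recon_error, sparsity

def dominates(a, b):
    """True if candidate a is no worse in both objectives and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

cand1 = objectives(0.1 * rng.normal(size=(10, 30)))
cand2 = objectives(0.5 * rng.normal(size=(10, 30)))
print(cand1, cand2, dominates(cand1, cand2))
```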

  5. Hierarchical document categorization using associative networks

    NARCIS (Netherlands)

    Bloom, Niels; Theune, Mariet; de Jong, Franciska M.G.; Klement, E.P.; Borutzky, W.; Fahringer, T.; Hamza, M.H.; Uskov, V.

    Associative networks are a connectionist language model with the ability to handle dynamic data. We used two associative networks to categorize random sets of related Wikipedia articles with only their raw text as input. We then compared the resulting categorization to a gold standard: the manual categorization.

  6. Joint stiffness identification of body structure using neural network. Jointed part composed of 2 beams; Neural network ni yoru shatai kozo no ketsugo gosei dotei. Buzai 2 hon kara naru ketsugobu no baai

    Energy Technology Data Exchange (ETDEWEB)

    Okabe, A.; Tomioka, N. [Nihon University, Tokyo (Japan)]

    1997-10-01

    A method to obtain joint stiffness values from the displacements of a jointed part using hierarchical neural networks, for the case of a jointed part composed of two beams, is proposed. First, sample data of jointed-part displacements versus joint stiffness are prepared as training data. Second, the relations between the displacements of the jointed part and the joint stiffness are constructed from these training data using a hierarchical neural network. It was found that the value of the joint stiffness can be obtained from the displacement of the jointed part by the trained neural network. 4 refs., 9 figs., 2 tabs.
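
    A toy version of this two-step procedure (an assumed forward displacement-stiffness relation and a generic feed-forward regressor, not the authors' model) looks like this: generate stiffness-versus-displacement samples, train on them, then read stiffness off the trained network for new displacements.

```python
# A toy sketch (assumed forward displacement-stiffness relation and a generic
# feed-forward regressor, not the authors' model): generate stiffness-versus-
# displacement samples, train on them, then read stiffness off the trained
# network for new displacements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
stiffness = rng.uniform(1.0, 100.0, size=500)
# Hypothetical relation: two displacement components that fall off with stiffness.
disp = np.stack([1.0 / stiffness, 0.5 / np.sqrt(stiffness)], axis=1)
disp += 0.001 * rng.normal(size=disp.shape)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0))
net.fit(disp[:400], stiffness[:400])
rel_err = np.abs(net.predict(disp[400:]) - stiffness[400:]) / stiffness[400:]
print("mean relative error:", rel_err.mean())
```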

  7. Changes of hierarchical network in local and world stock market

    Science.gov (United States)

    Patwary, Enayet Ullah; Lee, Jong Youl; Nobi, Ashadun; Kim, Doo Hwan; Lee, Jae Woo

    2017-10-01

    We consider the cross-correlation coefficients of the daily returns in local and global stock markets. We generate the minimal spanning tree (MST) using the correlation matrix. We observe that the MSTs change their structure from chain-like networks to star-like networks during periods of market uncertainty. We quantify the hierarchical structure of the network using the value of the hierarchy measured by the hierarchical path. The hierarchy and betweenness centrality characterize the state of the market with regard to the impact of crises. During crises, a non-financial company is established as the central node of the MST. However, before a crisis and during stable periods, a financial company occupies the central node of the MST in the Korean and the U.S. stock markets. The changes in the network structure and the central node are good indicators of an upcoming crisis.
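
    The construction of the MST from return data follows a standard recipe: correlation matrix, Mantegna distance d = sqrt(2(1 - rho)), then the minimum spanning tree. The sketch below runs it on synthetic one-factor returns (an assumption standing in for real market data) and prints two rough chain-versus-star indicators.

```python
# A sketch of the standard pipeline referenced above, run on synthetic one-factor
# returns standing in for real market data: correlation matrix, Mantegna distance
# d = sqrt(2(1 - rho)), minimum spanning tree, then two rough chain-vs-star
# indicators of the tree's shape.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n_stocks, n_days = 20, 500
market = rng.normal(size=n_days)                               # common factor
returns = 0.5 * market[:, None] + rng.normal(size=(n_days, n_stocks))

rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))

G = nx.Graph()
for i in range(n_stocks):
    for j in range(i + 1, n_stocks):
        G.add_edge(i, j, weight=dist[i, j])
mst = nx.minimum_spanning_tree(G)

deg = dict(mst.degree())
print("max degree:", max(deg.values()))                           # star-like if large
print("avg path length:", nx.average_shortest_path_length(mst))   # chain-like if large
```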

  8. Neural network classification - A Bayesian interpretation

    Science.gov (United States)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
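
    The underlying argument is the standard one, restated here rather than quoted from the paper: for 1-of-K targets, the function minimizing the expected squared error is the conditional expectation of the target, which equals the class posterior.

```latex
% Standard argument, restated (not quoted from the paper): with 1-of-K target
% coding y_c(x) in {0,1}, the expected squared error of output f_c decomposes as
\[
  \mathbb{E}\big[(f_c(x) - y_c)^2\big]
    = \mathbb{E}_x\big[(f_c(x) - \mathbb{E}[y_c \mid x])^2\big]
    + \mathbb{E}_x\big[\operatorname{Var}(y_c \mid x)\big],
\]
% so the minimizer over all functions is the conditional expectation
\[
  f_c^{*}(x) = \mathbb{E}[y_c \mid x] = P(\mathrm{class} = c \mid x),
\]
% i.e. with sufficient capacity and data, an MSE-trained network output
% approximates the Bayes posterior, which underlies the confidence measures
% discussed in this record.
```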

  9. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called weights, of the synapses) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  10. Isolated Speech Recognition Using Artificial Neural Networks

    Science.gov (United States)

    2007-11-02

    In this project, artificial neural networks are used as a research tool to accomplish automated speech recognition of normal speech. A small size...the first stage of this work are satisfactory and thus the application of artificial neural networks in conjunction with cepstral analysis in isolated word recognition holds promise.

  11. Neural Network Algorithm for Particle Loading

    Energy Technology Data Exchange (ETDEWEB)

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  12. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Creativity in design and artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Neocleous, C.C.; Esat, I.I. [Brunel Univ. Uxbridge (United Kingdom)]; Schizas, C.N. [Univ. of Cyprus, Nicosia (Cyprus)]

    1996-12-31

    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons that are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  14. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  15. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility of using Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, a Gaussian energy distribution and good energy resolution can be achieved. A significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  16. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  17. Information Sharing During Crisis Management in Hierarchical vs. Network Teams

    NARCIS (Netherlands)

    Schraagen, J.M.C.; Veld, M.H.I.T.; Koning, L. de

    2010-01-01

    This study examines the differences between hierarchical and network teams in emergency management. A controlled experimental environment was created in which we could study teams that differed in decision rights, availability of information, information sharing, and task division. Thirty-two teams

  18. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  19. DATA CLASSIFICATION WITH NEURAL CLASSIFIER USING RADIAL BASIS FUNCTION WITH DATA REDUCTION USING HIERARCHICAL CLUSTERING

    Directory of Open Access Journals (Sweden)

    M. Safish Mary

    2012-04-01

    Full Text Available Classification of large amounts of data is a time-consuming process but crucial for analysis and decision making. Radial Basis Function networks are widely used for classification and regression analysis. In this paper, we have studied the performance of RBF neural networks in classifying the sales of cars based on demand, using a kernel density estimation algorithm which produces classification accuracy comparable to that provided by support vector machines. We have proposed a new instance-based data selection method where redundant instances are removed with the help of a threshold, thus improving the time complexity with improved classification accuracy. The instance-based selection of the data set helps reduce the number of clusters formed, thereby reducing the number of centers considered for building the RBF network. Further, the efficiency of the training is improved by applying a hierarchical clustering technique to reduce the number of clusters formed at every step. The paper explains the algorithm used for classification and for conditioning the data. It also explains the complexities involved in the classification of sales data for analysis and decision-making.
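
    A minimal sketch of the general idea (not the paper's exact procedure): use hierarchical clustering to reduce the training data to a small set of RBF centres, then solve the output weights of the RBF network by least squares. The toy data, number of clusters and kernel width below are assumptions.

```python
# A sketch of the general idea, not the paper's exact procedure: reduce the
# training data to a handful of RBF centres via hierarchical clustering, then
# solve the RBF output weights by least squares. The toy data, number of
# clusters and kernel width are assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 2))
               for c in ((0.0, 0.0), (2.0, 2.0), (0.0, 3.0))])
y = np.repeat([0.0, 1.0, 2.0], 60)                 # toy class labels as targets

Z = linkage(X, method="ward")                      # hierarchical clustering
labels = fcluster(Z, t=6, criterion="maxclust")    # cut into 6 clusters
centres = np.vstack([X[labels == k].mean(axis=0) for k in np.unique(labels)])

sigma = 0.5                                        # assumed Gaussian kernel width
Phi = np.exp(-cdist(X, centres) ** 2 / (2.0 * sigma ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = np.rint(Phi @ w)
print("training accuracy:", np.mean(pred == y))
```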

  20. Rule Extraction using Artificial Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use a two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion by adding nodes one after another based on the performance of the network on training data. In the second phase, the number of relevant input units of the network is determined using a pruning algorithm. The ...

  1. Self-organized Criticality in Hierarchical Brain Network

    Institute of Scientific and Technical Information of China (English)

    YANG Qiu-Ying; ZHANG Ying-Yue; CHEN Tian-Lun

    2008-01-01

    It is shown that the cortical brain network of the macaque displays a hierarchically clustered organization and that the neuron network shows small-world properties. These two factors are incorporated into our model, whose dynamical behavior is then studied. We examine the characteristics of the model and find that its avalanche-size distribution follows a power law.

  2. An Extended Hierarchical Trusted Model for Wireless Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    DU Ruiying; XU Mingdi; ZHANG Huanguo

    2006-01-01

    Cryptography and authentication are traditional approaches to providing network security. However, they are not sufficient against malicious nodes that compromise the whole wireless sensor network, leading to invalid data transmission and wasted resources through vicious behaviors. This paper puts forward an extended hierarchical trusted architecture for wireless sensor networks and establishes trusted congregations within a three-tier framework. The method combines statistics and economics with encryption mechanisms to develop two trusted models, which evaluate cluster-head nodes and common sensor nodes respectively. The models form a logical trusted link from the command node to the common sensor nodes and guarantee that the network can run in a secure and reliable environment.

  3. Integrative activity of neural networks may code virtual spaces with internal representations.

    Science.gov (United States)

    Strelnikov, Kuzma

    2014-10-01

    It was shown recently in neuroimaging that spatial differentiation of brain activity provides novel information about brain function. This confirms the integrative organisation of brain activity, but given present technical limitations of neuroimaging approaches, the exact role of integrative activity remains unclear. We trained a neural network to integrate information using random numbers so as to imitate the "centre-periphery" pattern of brain activity in neuroimaging. Only the hierarchical organisation of the network permitted the learning of fast and reliable integration. We presented images to the trained network and, by spatial differentiation of the network activity, obtained virtual spaces with the presented images. Thus, our study established the necessity of the hierarchical organisation of neural networks for integration and demonstrated that the role of neural integration in the brain may be to create virtual spaces with internal representations of the objects.

  4. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  5. Hierarchical Ring Network Design Using Branch-and-Price

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Stidsen, Thomas K.

    2005-01-01

    We consider the problem of designing hierarchical two layer ring networks. The top layer consists of a federal-ring which establishes connection between a number of node disjoint metro-rings in a bottom layer. The objective is to minimize the costs of links in the network, taking both the fixed link establishment costs and the link capacity costs into account. Hierarchical ring network design problems combine the following optimization problems: clustering, hub selection, metro ring design, federal ring design and routing. In this paper a branch-and-price algorithm is presented for jointly solving the clustering problem, the metro ring design problem and the routing problem. Computational results are given for networks with up to 36 nodes.

  6. Perspective: network-guided pattern formation of neural dynamics.

    Science.gov (United States)

    Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C

    2014-10-05

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  7. Wavelet Neural Networks for Adaptive Equalization

    Institute of Scientific and Technical Information of China (English)

    JIANGMinghu; DENGBeixing; GIELENGeorges; ZHANGBo

    2003-01-01

    A structure based on wavelet neural networks (WNNs) is proposed for nonlinear channel equalization in a digital communication system. The construction algorithm of the minimum error probability (MEP) is presented and applied as a performance criterion to update the parameter matrix of the wavelet networks. Our experimental results show that the proposed wavelet-network-based equalizer can significantly improve neural modeling accuracy, performs quite well in compensating the nonlinear distortion introduced by the channel, and outperforms conventional neural networks in terms of signal-to-noise ratio and channel non-linearity.

  8. Subspace learning of neural networks

    CERN Document Server

    Cheng Lv, Jian; Zhou, Jiliu

    2010-01-01

    Contents: Preface. Chapter 1, Introduction: 1.1 Introduction (1.1.1 Linear Neural Networks; 1.1.2 Subspace Learning); 1.2 Subspace Learning Algorithms (1.2.1 PCA Learning Algorithms; 1.2.2 MCA Learning Algorithms; 1.2.3 ICA Learning Algorithms); 1.3 Methods for Convergence Analysis (1.3.1 SDT Method; 1.3.2 DCT Method; 1.3.3 DDT Method); 1.4 Block Algorithms; 1.5 Simulation Data Set and Notation; 1.6 Conclusions. Chapter 2, PCA Learning Algorithms with Constant Learning Rates: 2.1 Oja's PCA Learning Algorithms (2.1.1 The Algorithms; 2.1.2 Convergence Issue); 2.2 Invariant Sets (2.2.1 Properties of Invariant Sets; 2.2.2 Conditions for Invariant Sets); 2.

  9. Neural networks for damage identification

    Energy Technology Data Exchange (ETDEWEB)

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
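
    The classical PNN referred to above is essentially a Parzen-window Bayes classifier; the sketch below shows that core idea on made-up two-feature response data (the feature values, kernel width and class layout are illustrative assumptions, not the aerospace measurements used in the study).

      import numpy as np

      def pnn_classify(x, exemplars_by_class, sigma=0.5):
          """Classical probabilistic neural network (Parzen-window Bayes rule):
          each class score is the mean Gaussian kernel response of its
          exemplars, and the class with the largest score wins."""
          scores = []
          for exemplars in exemplars_by_class:
              d2 = ((exemplars - x) ** 2).sum(axis=1)
              scores.append(np.exp(-d2 / (2.0 * sigma ** 2)).mean())
          return int(np.argmax(scores)), scores

      # Hypothetical response features for the undamaged (class 0) and damaged
      # (class 1) system -- illustrative numbers, not the experimental data.
      rng = np.random.default_rng(2)
      undamaged = rng.normal(loc=[1.0, 0.5], scale=0.2, size=(50, 2))
      damaged = rng.normal(loc=[1.6, 0.9], scale=0.2, size=(50, 2))

      label, scores = pnn_classify(np.array([1.5, 0.85]), [undamaged, damaged])
      print("predicted class:", label, "scores:", scores)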

  10. Design of Hierarchical Ring Networks Using Branch-and-Price

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Stidsen, Thomas K.

    2004-01-01

    We consider the problem of designing hierarchical two layer ring networks. The top layer consists of a federal-ring which establishes connection between a number of node disjoint metro-rings in a bottom layer. The objective is to minimize the costs of links in the network, taking both the fixed link establishment costs and the link capacity costs into account. The hierarchical two layer ring network design problem is solved in two stages: first the bottom layer, i.e. the metro-rings, are designed, implicitly taking into account the capacity cost of the federal-ring. Then the federal-ring is designed connecting the metro-rings, minimizing fixed link establishment costs of the federal-ring. A branch-and-price algorithm is presented for the design of the bottom layer and it is suggested that existing methods are used for the design of the federal-ring. Computational results are given...

  11. Strongly Resilient Non-Interactive Key Predistribution For Hierarchical Networks

    CERN Document Server

    Chen, Hao

    2010-01-01

    Key establishment is the basic tool of network security, by which pairs of nodes in the network can establish shared keys for protecting their pairwise communications. There have been several key agreement or predistribution schemes with the property that the key can be established without interaction [Blom84, BSHKY92, S97]. Recently, hierarchical cryptography and key management for hierarchical networks have been active topics (see [BBG05, GHKRRW08, GS02, HNZI02, HL02, Matt04]). Key agreement schemes for hierarchical networks based on the Blom key predistribution scheme (Blom KPS, [1]) and pairing were presented in [Matt04, GHKRRW08]. In this paper we introduce generalized Blom-Blundo et al. key predistribution schemes. These generalized schemes have the same security functionality as the Blom-Blundo et al. KPS; however, being different and random, they can be used for various parts of the network to enhance resilience. We also present...

  12. Convolutional Neural Networks Applied to House Numbers Digit Classification

    CERN Document Server

    Sermanet, Pierre; LeCun, Yann

    2012-01-01

    We classify digits of real-world house numbers using convolutional neural networks (ConvNets). ConvNets are hierarchical feature learning neural networks whose structure is biologically inspired. Unlike many popular vision approaches that are hand-designed, ConvNets can automatically learn a unique set of features optimized for a given task. We augmented the traditional ConvNet architecture by learning multi-stage features and by using Lp pooling and establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2% error improvement). Furthermore, we analyze the benefits of different pooling methods and multi-stage features in ConvNets. The source code and a tutorial are available at eblearn.sf.net.
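
    Of the ingredients mentioned, Lp pooling is simple to show in isolation; a short NumPy sketch of non-overlapping Lp pooling follows (the window size and p are arbitrary choices here, and this is not the authors' full multi-stage architecture).

      import numpy as np

      def lp_pool(feature_map, size=2, p=2.0):
          """Lp pooling over non-overlapping size x size windows:
          (mean |x|^p) ** (1/p).  p = 1 gives average pooling and large p
          approaches max pooling."""
          h, w = feature_map.shape
          h, w = h - h % size, w - w % size          # drop any ragged border
          blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
          return (np.abs(blocks) ** p).mean(axis=(1, 3)) ** (1.0 / p)

      fm = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 feature map
      print(lp_pool(fm, size=2, p=2.0))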

  13. Glaucoma detection based on deep convolutional neural network.

    Science.gov (United States)

    Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu

    2015-08-01

    Glaucoma is a chronic and irreversible eye disease which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with a convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show an area under the receiver operating characteristic curve (AUC) for glaucoma detection of 0.831 and 0.887 on the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection.

  14. Nonlinear programming with feedforward neural networks.

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  15. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  16. Learning Algorithms of Multilayer Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1996-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward multilayer neural network, with far interlayer synaptic connections, and we obtain a learning rule similar to that of the Boltzmann machine on the same multilayer structure. By applying a mean field approximation to the stochastic feed-forward neural network, the generalized error back-propagation learning rule is derived for a deterministic analog feed-forward multilayer network with the far interlay...

  17. Hierarchical Compressed Sensing for Cluster Based Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Vishal Krishna Singh

    2016-02-01

    Full Text Available Data transmission consumes a significant amount of energy in large-scale wireless sensor networks (WSNs). In such an environment, reducing the in-network communication and distributing the load evenly over the network can reduce the overall energy consumption and maximize the network lifetime significantly. In this work, the aforementioned problems of network lifetime and uneven energy consumption in large-scale wireless sensor networks are addressed. This work proposes a hierarchical compressed sensing (HCS) scheme to reduce the in-network communication during the data gathering process. Correlated sensor readings are collected via a hierarchical clustering scheme. A compressed sensing (CS) based data processing scheme is devised to transmit the data from the source to the sink. The proposed HCS is able to identify the optimal position for the application of CS to achieve a reduced and similar number of transmissions on all the nodes in the network. An activity map is generated to validate the reduced and uniformly distributed communication load of the WSN. Based on the number of transmissions per data gathering round, the bit-hop metric model is used to analyse the overall energy consumption. Simulation results validate the efficiency of the proposed method over existing CS-based approaches.
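
    The compressed-sensing step can be sketched independently of the clustering protocol: random projections on the sensing side and a sparse-recovery solver at the sink. The example below uses orthogonal matching pursuit purely for illustration; the paper does not prescribe this particular solver, and the signal dimensions and sparsity level are invented.

      import numpy as np

      def omp(A, y, k):
          """Orthogonal matching pursuit: greedily pick k columns of A that best
          explain y, refitting least squares on the selected support each step."""
          residual, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ x_s
          x = np.zeros(A.shape[1])
          x[support] = x_s
          return x

      rng = np.random.default_rng(3)
      n, m, k = 256, 64, 8                      # signal length, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

      A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix (sensor side)
      y = A @ x_true                            # compressed measurements sent to the sink
      x_hat = omp(A, y, k)
      print("reconstruction error:", np.linalg.norm(x_hat - x_true))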

  18. Research of The Deeper Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao You Rong

    2016-01-01

    Full Text Available Neural networks (NNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. Many neural models with different network structures have been constructed. In this report, a deeper neural network (DNN) architecture is proposed. The training algorithm of the deeper neural network involves searching for the global optimal point on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed from simple to complicated cases, and the features of the error surface are obtained. Based on these characteristics, the initialization method and training algorithm of DNNs are designed. For the initialization, a block-uniform design method is proposed which separates the error surface into blocks and finds the optimal block using the uniform design method. For the training algorithm, an improved gradient-descent method is proposed which adds a penalty term to the cost function of the standard gradient-descent method. This algorithm gives the network strong approximation ability and keeps the network state stable. All of these improve the practicality of the neural network.

  19. Time Synchronization in Hierarchical TESLA Wireless Sensor Networks

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Milos Manic

    2009-08-01

    Time synchronization and event time correlation are important in wireless sensor networks. In particular, time is used to create a sequence of events, or time line, to answer questions of cause and effect. Time is also used as a basis for determining the freshness of received packets and the validity of cryptographic certificates. This paper presents a secure method of time synchronization and event time correlation for TESLA-based hierarchical wireless sensor networks. The method demonstrates that events in a TESLA network can be accurately timestamped by adding only a few pieces of data to the existing protocol.

  20. Acute appendicitis diagnosis using artificial neural networks.

    Science.gov (United States)

    Park, Sung Yun; Kim, Sung Min

    2015-01-01

    Artificial neural networks are pattern-analysis methods that are being rapidly applied in the biomedical field. The aim of this research was to propose an appendicitis diagnosis system using artificial neural networks (ANNs). Data from 801 patients of the university hospital in Dongguk were used to construct artificial neural networks for diagnosing appendicitis and acute appendicitis. A radial basis function neural network structure (RBF), a multilayer neural network structure (MLNN), and a probabilistic neural network structure (PNN) were used as the artificial neural network models. The Alvarado clinical scoring system was used for comparison with the ANNs. The accuracy of the RBF, PNN, MLNN, and Alvarado was 99.80%, 99.41%, 97.84%, and 72.19%, respectively. The area under the ROC (receiver operating characteristic) curve of the RBF, PNN, MLNN, and Alvarado was 0.998, 0.993, 0.985, and 0.633, respectively. The proposed ANN models for diagnosing appendicitis showed good performance and were significantly better than the Alvarado clinical scoring system (p < 0.001). With cooperation among facilities, the accuracy of diagnosing this serious health condition can be improved.

  1. Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks

    CERN Document Server

    Kaaniche, Heni

    2010-01-01

    Mobility prediction allows estimating the stability of paths in mobile wireless ad hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural-network-based method for mobility prediction in ad hoc networks. The method consists of a multi-layer, recurrent neural network trained with the backpropagation-through-time algorithm.

  2. Codevelopmental learning between human and humanoid robot using a dynamic neural-network model.

    Science.gov (United States)

    Tani, Jun; Nishimoto, Ryu; Namikawa, Jun; Ito, Masato

    2008-02-01

    This paper examines characteristics of interactive learning between human tutors and a robot having a dynamic neural-network model, which is inspired by human parietal cortex functions. A humanoid robot, with a recurrent neural network that has a hierarchical structure, learns to manipulate objects. Robots learn tasks in repeated self-trials with the assistance of human interaction, which provides physical guidance until the tasks are mastered and learning is consolidated within the neural networks. Experimental results and the analyses showed the following: 1) codevelopmental shaping of task behaviors stems from interactions between the robot and a tutor; 2) dynamic structures for articulating and sequencing of behavior primitives are self-organized in the hierarchically organized network; and 3) such structures can afford both generalization and context dependency in generating skilled behaviors.

  3. Neural network regulation driven by autonomous neural firings

    Science.gov (United States)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  4. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptron, self-stabilized adaptive resonance network, hierarchically structured neocognitron, high order correlator, network with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features, such as edges and profiles, of images as the data form for input. Other systems use raw data as input signals to the networks. We will present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating those low-level functions into a high-level cognitive system, achieving invariances and other problems. Perspectives of applications of some human vision models and neural network models are analyzed.

  5. Extending stability through hierarchical clusters in Echo State Networks

    Directory of Open Access Journals (Sweden)

    Sarah Jarvis

    2010-07-01

    Full Text Available Echo State Networks (ESNs) are reservoir networks that satisfy well-established criteria for stability when constructed as feedforward networks. Recent evidence suggests that stability criteria are altered in the presence of reservoir substructures, such as clusters. Understanding how the reservoir architecture affects stability is thus important for the appropriate design of any ESN. To quantitatively determine the influence of the most relevant network parameters, we analysed the impact of reservoir substructures on stability in hierarchically clustered ESNs (HESNs), as they allow a smooth transition from highly structured to increasingly homogeneous reservoirs. Previous studies used the largest eigenvalue of the reservoir connectivity matrix (the spectral radius) as a predictor for stable network dynamics. Here, we evaluate the impact of clusters, hierarchy and intercluster connectivity on the predictive power of the spectral radius for stability. Both hierarchy and low relative cluster sizes extend the range of spectral radius values leading to stable networks, while increasing intercluster connectivity decreases the maximal spectral radius.
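
    The stability knob the study revisits is the spectral radius of the reservoir matrix. A plain (non-hierarchical) ESN sketch that rescales a sparse random reservoir to a target spectral radius and runs the state update is shown below; the reservoir size, sparsity, target radius of 0.9 and sine input are assumptions for the example, not the HESN construction analysed in the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      N = 200

      # Sparse random reservoir, rescaled so its spectral radius (largest
      # absolute eigenvalue) equals a target value -- the usual ESN stability knob.
      W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.1)
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

      W_in = rng.uniform(-0.5, 0.5, size=N)
      x = np.zeros(N)
      for t in range(100):                      # drive the reservoir with a sine input
          u = np.sin(0.2 * t)
          x = np.tanh(W @ x + W_in * u)

      print("reservoir state norm after 100 steps:", np.linalg.norm(x))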

  6. Structure function relationship in complex brain networks expressed by hierarchical synchronization

    Science.gov (United States)

    Zhou, Changsong; Zemanová, Lucia; Zamora-López, Gorka; Hilgetag, Claus C.; Kurths, Jürgen

    2007-06-01

    The brain is one of the most complex systems in nature, with a structured complex connectivity. Recently, large-scale corticocortical connectivities, both structural and functional, have received a great deal of research attention, especially using the approach of complex network analysis. Understanding the relationship between structural and functional connectivity is of crucial importance in neuroscience. Here we try to illuminate this relationship by studying synchronization dynamics in a realistic anatomical network of cat cortical connectivity. We model the nodes (cortical areas) by a neural mass model (population model) or by a subnetwork of interacting excitable neurons (multilevel model). We show that if the dynamics is characterized by well-defined oscillations (neural mass model and subnetworks with strong couplings), the synchronization patterns are mainly determined by the node intensity (total input strengths of a node) and the detailed network topology is rather irrelevant. On the other hand, the multilevel model with weak couplings displays more irregular, biologically plausible dynamics, and the synchronization patterns reveal a hierarchical cluster organization in the network structure. The relationship between structural and functional connectivity at different levels of synchronization is explored. Thus, the study of synchronization in a multilevel complex network model of cortex can provide insights into the relationship between network topology and functional organization of complex brain networks.

  7. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The neural networks and the genetic algorithm were programmed using MATLAB®.

  8. Neural networks techniques applied to reservoir engineering

    Energy Technology Data Exchange (ETDEWEB)

    Flores, M. [Gerencia de Proyectos Geotermoelectricos, Morelia (Mexico); Barragan, C. [RockoHill de Mexico, Indiana (Mexico)

    1995-12-31

    Neural Networks are considered the greatest technological advance since the transistor. They are expected to be a common household item by the year 2000. An attempt to apply Neural Networks to an important geothermal problem has been made, predictions on the well production and well completion during drilling in a geothermal field. This was done in Los Humeros geothermal field, using two common types of Neural Network models, available in commercial software. Results show the learning capacity of the developed model, and its precision in the predictions that were made.

  9. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    A neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The artificial neural network model of this research uses slope characteristics as input and leads to the output in the form of the probability of failure and the factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and the safety factor of landslide hazard in the study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss...

  10. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
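
    The kernel-based construction of the paper is not reproduced here, but the underlying task -- conditional quantile estimation -- can be sketched with the generic pinball (check) loss; the linear model, learning rate and synthetic heteroscedastic data below are assumptions for illustration only.

      import numpy as np

      def pinball_fit(X, y, tau=0.9, lr=0.05, epochs=3000):
          """Fit a linear conditional-quantile model by subgradient descent
          on the pinball loss; tau is the quantile level to estimate."""
          w, b = np.zeros(X.shape[1]), 0.0
          for _ in range(epochs):
              r = y - (X @ w + b)                    # residuals
              g = np.where(r > 0, -tau, 1.0 - tau)   # d(pinball)/d(prediction)
              w -= lr * (X.T @ g) / len(y)
              b -= lr * g.mean()
          return w, b

      rng = np.random.default_rng(5)
      X = rng.uniform(0.0, 2.0, size=(500, 1))
      y = 2.0 * X[:, 0] + rng.normal(scale=0.3 + 0.3 * X[:, 0])   # heteroscedastic noise

      w, b = pinball_fit(X, y, tau=0.9)
      print("estimated 0.9-quantile line: slope %.2f, intercept %.2f" % (w[0], b))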

  12. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the jet data from miniaodsim (ak4 chs). The jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on unphysical data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  13. Threshold control of chaotic neural network.

    Science.gov (United States)

    He, Guoguang; Shrimali, Manish Dev; Aihara, Kazuyuki

    2008-01-01

    The chaotic neural network constructed with chaotic neurons exhibits rich dynamic behaviour with a nonperiodic associative memory. In the chaotic neural network, however, it is difficult to distinguish the stored patterns in the output patterns because of the chaotic state of the network. In order to apply the nonperiodic associative memory to information search, pattern recognition, etc., it is necessary to control chaos in the chaotic neural network. We have studied the chaotic neural network with threshold-activated coupling, which provides a controlled network with associative memory dynamics. The network converges to one of its stored patterns and/or reverse patterns which has the smallest Hamming distance from the initial state of the network. The range of the threshold applied to control the neurons in the network depends on the noise level in the initial pattern and decreases as the noise increases. Chaos control in the chaotic neural network by threshold-activated coupling at varying time intervals provides controlled output patterns with different temporal periods which depend upon the control parameters.

  14. Method of Parallel-Hierarchical Network Self-Training and its Application for Pattern Classification and Recognition

    Directory of Open Access Journals (Sweden)

    TIMCHENKO, L.

    2012-11-01

    Full Text Available Propositions necessary for development of parallel-hierarchical (PH) network training methods are discussed in this article. Unlike already known structures of the artificial neural network, where non-normalized (absolute) similarity criteria are used for comparison, the suggested structure uses a normalized criterion. Based on the analysis of training rules, a conclusion is made that application of two training methods with a teacher is optimal for PH network training: error correction-based training and memory-based training. Mathematical models of training and a combined method of PH network training for recognition of static and dynamic patterns are developed.

  15. Nonequilibrium landscape theory of neural networks.

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.

  16. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  17. Character Recognition Using Novel Optoelectronic Neural Network

    Science.gov (United States)

    1993-04-01

    The ADALINE neuron and linear separability are discussed, providing a justification for multilayer networks. The MADALINE (many-ADALINE) multilayer network is also covered. The ADALINE functions as an adaptive threshold logic element used in many neural networks (Figure 3.1). In a digital implementation, an input...

  18. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given.

  19. Nonlinear System Control Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Jaroslava Žilková

    2006-10-01

    Full Text Available The paper focuses on presenting the possibilities of applying off-line trained artificial neural networks to creating system inverse models that are used in designing control algorithms for non-linear dynamic systems. The ability of cascade feed-forward neural networks to model arbitrary non-linear functions and their inverses is exploited. This paper presents a quasi-inverse neural model, which works as a speed controller of an induction motor. The neural speed controller consists of two cascade feed-forward neural network subsystems. The first subsystem provides the desired stator current components for the control algorithm and the second subsystem provides the corresponding voltage components for the PWM converter. The availability of the proposed controller is verified through MATLAB simulation. The effectiveness of the controller is demonstrated for different operating conditions of the drive system.

  20. Hierarchical Resource Allocation in Femtocell Networks using Graph Algorithms

    CERN Document Server

    Sadr, Sanam

    2012-01-01

    This paper presents a hierarchical approach to resource allocation in open-access femtocell networks. The major challenge in femtocell networks is interference management which in our system, based on the Long Term Evolution (LTE) standard, translates to which user should be allocated which physical resource block (or fraction thereof) from which femtocell access point (FAP). The globally optimal solution requires integer programming and is mathematically intractable. We propose a hierarchical three-stage solution: first, the load of each FAP is estimated considering the number of users connected to the FAP, their average channel gain and required data rates. Second, based on each FAP's load, the physical resource blocks (PRBs) are allocated to FAPs in a manner that minimizes the interference by coloring the modified interference graph. Finally, the resource allocation is performed at each FAP considering users' instantaneous channel gain. The two major advantages of this suboptimal approach are the significa...
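
    The second stage described above amounts to colouring an interference graph; a toy greedy-colouring sketch follows (the six-node adjacency matrix is hypothetical, and the paper's actual modified-graph construction and colouring method may differ).

      def greedy_colouring(adj):
          """Greedy graph colouring: visit nodes in order of decreasing degree and
          give each the smallest colour not used by an already-coloured neighbour."""
          n = len(adj)
          order = sorted(range(n), key=lambda v: -sum(adj[v]))
          colour = [-1] * n
          for v in order:
              used = {colour[u] for u in range(n) if adj[v][u] and colour[u] != -1}
              c = 0
              while c in used:
                  c += 1
              colour[v] = c
          return colour

      # Hypothetical interference graph between six femtocell access points
      # (1 = strong mutual interference); colours map to disjoint PRB groups.
      adj = [[0, 1, 1, 0, 0, 0],
             [1, 0, 1, 1, 0, 0],
             [1, 1, 0, 0, 1, 0],
             [0, 1, 0, 0, 1, 1],
             [0, 0, 1, 1, 0, 1],
             [0, 0, 0, 1, 1, 0]]
      print("PRB group per FAP:", greedy_colouring(adj))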

  1. Replanning Using Hierarchical Task Network and Operator-Based Planning

    Science.gov (United States)

    Wang, X.; Chien, S.

    1997-01-01

    In order to scale up to real-world problems, planning systems must be able to replan in order to deal with changes in problem context. In this paper we describe hierarchical task network and operator-based replanning techniques which allow adaptation of a previous plan to account for problems associated with executing plans in real-world domains with uncertainty, concurrency, and changing objectives.

  2. Recognition of Telugu characters using neural networks.

    Science.gov (United States)

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
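
    The Hopfield associative memory at the core of the approach can be sketched compactly: Hebbian outer-product storage plus iterated sign updates for recall. The random +/-1 patterns and the 100-neuron size below are stand-ins, not the actual Telugu character encodings.

      import numpy as np

      def hopfield_train(patterns):
          """Hebbian storage: W is the sum of outer products of +/-1 patterns,
          with the diagonal zeroed."""
          W = patterns.T @ patterns / patterns.shape[1]
          np.fill_diagonal(W, 0.0)
          return W

      def hopfield_recall(W, probe, steps=20):
          """Synchronous recall: repeatedly apply sign(W x) until the state settles."""
          x = probe.copy()
          for _ in range(steps):
              x_new = np.where(W @ x >= 0, 1, -1)
              if np.array_equal(x_new, x):
                  break
              x = x_new
          return x

      rng = np.random.default_rng(6)
      patterns = rng.choice([-1, 1], size=(3, 100))    # three stored 100-pixel "characters"
      W = hopfield_train(patterns)

      noisy = patterns[0].copy()
      noisy[rng.choice(100, 10, replace=False)] *= -1  # corrupt 10 pixels
      recalled = hopfield_recall(W, noisy)
      print("bits matching the stored pattern:", int((recalled == patterns[0]).sum()), "/ 100")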

  3. Multi-mode clustering model for hierarchical wireless sensor networks

    Science.gov (United States)

    Hu, Xiangdong; Li, Yongfu; Xu, Huifen

    2017-03-01

    The topology management, i.e., cluster maintenance, of wireless sensor networks (WSNs) is still a challenge due to their numerous nodes, diverse application scenarios and limited resources, as well as complex dynamics. To address this issue, a multi-mode clustering model (M2CM) is proposed to maintain the clusters of hierarchical WSNs in this study. In particular, unlike the traditional time-triggered model based on whole-network and periodic operations, the M2CM is built on local and event-triggered operations. In addition, an adaptive local maintenance algorithm is designed for broken clusters in the WSN according to spatial-temporal changes in demand. Numerical experiments are performed using the NS2 network simulation platform. Results validate the effectiveness of the proposed model with respect to network maintenance costs, node energy consumption and transmitted data, as well as network lifetime.

  4. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    Science.gov (United States)

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  5. Neural Networks for Dynamic Flight Control

    Science.gov (United States)

    1993-12-01

    uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The primary difference in the Nguyen application is that the Adaline uses the nonlinear function f(a) = tanh(a), where standard backprop uses the sigmoid

  6. Hierarchical Communication Network Architectures for Offshore Wind Power Farms

    Directory of Open Access Journals (Sweden)

    Mohamed A. Ahmed

    2014-05-01

    Full Text Available Nowadays, large-scale wind power farms (WPFs bring new challenges for both electric systems and communication networks. Communication networks are an essential part of WPFs because they provide real-time control and monitoring of wind turbines from a remote location (local control center. However, different wind turbine applications have different requirements in terms of data volume, latency, bandwidth, QoS, etc. This paper proposes a hierarchical communication network architecture that consists of a turbine area network (TAN, farm area network (FAN, and control area network (CAN for offshore WPFs. The two types of offshore WPFs studied are small-scale WPFs close to the grid and medium-scale WPFs far from the grid. The wind turbines are modelled based on the logical node (LN concepts of the IEC 61400-25 standard. To keep pace with current developments in wind turbine technology, the network design takes into account the extension of the LNs for both the wind turbine foundation and meteorological measurements. The proposed hierarchical communication network is based on Switched Ethernet. Servers at the control center are used to store and process the data received from the WPF. The network architecture is modelled and evaluated via OPNET. We investigated the end-to-end (ETE delay for different WPF applications. The results are validated by comparing the amount of generated sensing data with that of received traffic at servers. The network performance is evaluated, analyzed and discussed in view of ETE delay for different link bandwidths.

  7. Neural networks convergence using physicochemical data.

    Science.gov (United States)

    Karelson, Mati; Dobchev, Dimitar A; Kulshyn, Oleksandr V; Katritzky, Alan R

    2006-01-01

    An investigation of the neural network convergence and prediction based on three optimization algorithms, namely, Levenberg-Marquardt, conjugate gradient, and delta rule, is described. Several simulated neural networks built using the above three algorithms indicated that the Levenberg-Marquardt optimizer implemented as a back-propagation neural network converged faster than the other two algorithms and provides in most of the cases better prediction. These conclusions are based on eight physicochemical data sets, each with a significant number of compounds comparable to that usually used in the QSAR/QSPR modeling. The superiority of the Levenberg-Marquardt algorithm is revealed in terms of functional dependence of the change of the neural network weights with respect to the gradient of the error propagation as well as distribution of the weight values. The prediction of the models is assessed by the error of the validation sets not used in the training process.
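
    For reference, the Levenberg-Marquardt update being compared is the damped Gauss-Newton step (J^T J + lambda I) dw = -J^T r. The sketch below applies it, with a simple accept/reject rule for the damping factor, to a toy exponential fit; the model, data and damping schedule are assumptions for illustration, not the QSAR/QSPR networks of the study.

      import numpy as np

      def levenberg_marquardt(residual_fn, jacobian_fn, w, iters=50, lam=1e-2):
          """Levenberg-Marquardt: solve (J^T J + lam*I) dw = -J^T r each step,
          shrinking lam when a step lowers the squared error and growing it
          (while rejecting the step) otherwise."""
          cost = np.sum(residual_fn(w) ** 2)
          for _ in range(iters):
              r, J = residual_fn(w), jacobian_fn(w)
              dw = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), -J.T @ r)
              new_cost = np.sum(residual_fn(w + dw) ** 2)
              if new_cost < cost:            # accept: behave more like Gauss-Newton
                  w, cost, lam = w + dw, new_cost, lam * 0.5
              else:                          # reject: behave more like gradient descent
                  lam *= 10.0
          return w

      # Toy least-squares problem: fit y = a * exp(b * x) to synthetic data.
      x = np.linspace(0.0, 1.0, 20)
      y = 2.0 * np.exp(1.5 * x)
      res = lambda w: w[0] * np.exp(w[1] * x) - y
      jac = lambda w: np.column_stack([np.exp(w[1] * x), w[0] * x * np.exp(w[1] * x)])

      print("fitted [a, b]:", levenberg_marquardt(res, jac, np.array([1.0, 1.0])))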

  8. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    methods. That is why it is becoming popular in various fields including coastal engineering. Waves and tides play important roles in coastal erosion or accretion. This paper briefly describes back-propagation neural networks and their application...

  9. Neural Network Based 3D Surface Reconstruction

    Directory of Open Access Journals (Sweden)

    Vincy Joseph

    2009-11-01

    Full Text Available This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach.

  10. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  11. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied. A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed, feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  12. TIME SERIES FORECASTING USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2013-05-01

    Full Text Available Recent studies have shown the classification and prediction power of neural networks. It has been demonstrated that a NN can approximate any continuous function. Neural networks have been successfully used for forecasting financial data series. The classical methods used for time series prediction, such as Box-Jenkins or ARIMA, assume that there is a linear relationship between inputs and outputs. Neural networks have the advantage that they can approximate nonlinear functions. In this paper we compared the performance of different feed-forward and recurrent neural networks and training algorithms for predicting the EUR/RON and USD/RON exchange rates. We used data series of daily exchange rates from 2005 until 2013.
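
    A minimal sketch of the windowing step common to such forecasters follows; it fits only a linear one-step-ahead predictor on a synthetic exchange-rate-like series (the series, lag length and train/test split are invented, and the paper's MLP/recurrent models and EUR/RON data are not reproduced here).

      import numpy as np

      def make_windows(series, lag):
          """Turn a 1-D series into (lagged window, next value) pairs for
          one-step-ahead forecasting."""
          X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
          return X, series[lag:]

      rng = np.random.default_rng(7)
      # Synthetic exchange-rate-like series: slow oscillation plus a random walk.
      t = np.arange(600)
      series = 4.3 + 0.1 * np.sin(2 * np.pi * t / 120) + 0.01 * rng.normal(size=600).cumsum()

      lag = 5
      X, y = make_windows(series, lag)
      X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

      # Simplest feed-forward predictor: a single linear layer fitted by least
      # squares, standing in for the MLP/recurrent models compared in the paper.
      w, *_ = np.linalg.lstsq(np.c_[X_train, np.ones(len(X_train))], y_train, rcond=None)
      pred = np.c_[X_test, np.ones(len(X_test))] @ w
      print("one-step-ahead test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))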

  13. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  14. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  15. Decentralized Cooperative TOA/AOA Target Tracking for Hierarchical Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chih-Yu Wen

    2012-11-01

    Full Text Available This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is conducted to achieve object positioning, considering a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural networking model, and further performs estimation fusion in the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.

  16. Decentralized cooperative TOA/AOA target tracking for hierarchical wireless sensor networks.

    Science.gov (United States)

    Chen, Ying-Chih; Wen, Chih-Yu

    2012-11-08

    This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is conducted to achieve object positioning, considering a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural networking model, and further performs estimation fusion in the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.

  17. Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.

    Science.gov (United States)

    Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc

    2016-08-01

    This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data-driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.

  18. Multiscale approach for bone remodeling simulation based on finite element and neural network computation

    CERN Document Server

    Hambli, Ridha

    2011-01-01

    The aim of this paper is to develop a multiscale hierarchical hybrid model based on finite element analysis and neural network computation to link the mesoscopic scale (trabecular network level) and the macroscopic scale (whole bone level) to simulate the bone remodelling process. Because whole bone simulation considering the 3D trabecular level is time consuming, the finite element calculation is performed at the macroscopic level and a trained neural network is employed as a numerical device substituting for the finite element code needed for the mesoscale prediction. The bone mechanical properties are updated at the macroscopic scale depending on the morphological organization at the mesoscopic scale computed by the trained neural network. The digital image-based modeling technique using micro-CT and a voxel finite element mesh is used to capture 2 mm3 Representative Volume Elements at the mesoscale level in a femur head. The input data for the artificial neural network are a set of bone material parameters, boundary conditions and the applied str...
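
    A rough sketch of the surrogate idea (a regression network standing in for the expensive mesoscale finite element computation; the inputs, outputs and the placeholder "expensive" function below are synthetic illustrations, not the actual bone model or its parameters):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(8)

    def expensive_mesoscale_fe(params):
        # Placeholder for a mesoscale finite element run on a small RVE:
        # maps (base modulus, porosity, applied strain) to an effective stiffness.
        E0, porosity, strain = params
        return E0 * (1 - porosity) ** 2 * (1 + 0.1 * strain)

    # Offline: sample the expensive model to build a training set
    X = np.column_stack([rng.uniform(10, 20, 500),      # base modulus (GPa)
                         rng.uniform(0.5, 0.9, 500),    # porosity
                         rng.uniform(0.0, 0.02, 500)])  # applied strain
    y = np.array([expensive_mesoscale_fe(p) for p in X])

    surrogate = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32),
                                           max_iter=5000, random_state=0))
    surrogate.fit(X, y)

    # Online: the macroscopic FE loop queries the cheap surrogate instead of
    # re-running the mesoscale computation at every update step.
    query = np.array([[15.0, 0.7, 0.01]])
    print("surrogate stiffness:", surrogate.predict(query)[0],
          " reference:", expensive_mesoscale_fe(query[0]))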

  19. Artificial neural network and medicine.

    Science.gov (United States)

    Khan, Z H; Mohapatra, S K; Khodiar, P K; Ragu Kumar, S N

    1998-07-01

    The introduction of human brain functions such as perception and cognition into the computer has been made possible by the use of Artificial Neural Networks (ANN). ANN are computer models inspired by the structure and behavior of neurons. Like the brain, ANN can recognize patterns, manage data and, most significantly, learn. This learning ability, not seen in other computer models simulating human intelligence, constantly improves its functional accuracy as it keeps on performing. Experience is as important for an ANN as it is for man. It is being increasingly used to supplement, and perhaps eventually replace, experts in medicine. However, there is still scope for improvement in some areas. Its ability to classify and interpret various forms of medical data comes as a helping hand to clinical decision making in both diagnosis and treatment. Treatment planning in medicine, radiotherapy, rehabilitation, etc. is being done using ANN. Morbidity and mortality prediction by ANN in different medical situations can be very helpful for hospital management. ANN has a promising future in fundamental research, medical education and surgical robotics.

  20. Neural network for image segmentation

    Science.gov (United States)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the PCNN application to the processing of images of heterogeneous materials; specifically, PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate PCNN sensitivity to the setting of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNN more automatic in our application and also results in better segmentation.

  1. Pattern Recognition Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Santaji Ghorpade

    2010-12-01

    Full Text Available Face recognition has been identified as one of the attractive research areas and it has drawn the attention of many researchers due to its varying applications such as security systems, medical systems, entertainment, etc. Face recognition is the preferred mode of identification by humans: it is natural, robust and non-intrusive. A wide variety of systems requires reliable personal recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that the rendered services are accessed only by a legitimate user and no one else. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. In the absence of robust personal recognition schemes, these systems are vulnerable to the wiles of an impostor. In this paper we have developed and illustrated a recognition system for human faces using a novel Kohonen self-organizing map (SOM, or Self-Organizing Feature Map, SOFM) based retrieval system. SOM has good feature extracting properties due to its topological ordering. The facial analytics results for the 400 images of the AT&T database reflect that the face recognition rate using the SOM neural network algorithm is 85.5% for 40 persons.
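
    A minimal sketch of the Kohonen SOM training loop that underlies such a retrieval system (random vectors stand in for the face feature vectors; the grid size, learning rate and neighborhood schedule are illustrative choices, not those of the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.random((400, 64))        # stand-in for 400 face feature vectors

    rows, cols, dim = 10, 10, data.shape[1]
    weights = rng.random((rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    n_iter, lr0, sigma0 = 2000, 0.5, 3.0
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU): node whose weight vector is closest to x
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Decaying learning rate and neighborhood radius
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        # Gaussian neighborhood around the BMU on the map grid
        grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)

    # After training, each face maps to the grid cell of its BMU, which
    # preserves the topological ordering of the feature space.
    bmu_first = np.unravel_index(np.argmin(np.linalg.norm(weights - data[0], axis=2)), (rows, cols))
    print("BMU of first sample:", bmu_first)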

  2. Applications of Pulse-Coupled Neural Networks

    CERN Document Server

    Ma, Yide; Wang, Zhaobin

    2011-01-01

    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  3. NARX neural networks for sequence processing tasks

    OpenAIRE

    Hristev, Eugen

    2012-01-01

    This project aims at researching and implementing a neural network architecture system for the NARX (Nonlinear AutoRegressive with eXogenous inputs) model, used in sequence processing tasks and particularly in time series prediction. The model can fall back to different types of architectures, including time-delay neural networks and the multilayer perceptron. The NARX simulator tests and compares the different architectures for both synthetic and real data, including the time series o...

  4. Neural network models of protein domain evolution

    OpenAIRE

    Sylvia Nagl

    2000-01-01

    Protein domains are complex adaptive systems, and here a novel procedure is presented that models the evolution of new functional sites within stable domain folds using neural networks. Neural networks, which were originally developed in cognitive science for the modeling of brain functions, can provide a fruitful methodology for the study of complex systems in general. Ethical implications of developing complex systems models of biomolecules are discussed, with particular reference to molecu...

  5. Routing and wavelength assignment in hierarchical WDM networks

    Institute of Scientific and Technical Information of China (English)

    Yiyi LU; Ruxiang JIN; Chen HE

    2008-01-01

    A new routing and wavelength assignment method applied in hierarchical wavelength division multiplexing (WDM) networks is proposed. The algorithm is called the offline band priority algorithm (offline BPA). The offline BPA aims to maximize the number of waveband paths under the condition of a minimum number of wavelengths, and solves the routing and wavelength assignment (RWA) problem with waveband grooming to reduce cost. Based on the circle construction algorithm, a waveband priority function is introduced to calculate the RWA problem. Simulation results demonstrate that the proposed algorithm achieves significant cost reduction in WDM network construction.

  6. Hierarchical control based on Hopfield network for nonseparable optimization problems

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The nonseparable optimization control problem is considered, where the overall objective function is not of an additive form with respect to subsystems. Since computation is very slow when iterative algorithms are used in multiobjective optimization, a Hopfield optimization hierarchical network based on IPM is presented to overcome this difficulty. Asymptotic stability of this Hopfield network is proved and its equilibrium point is the optimal point of the original problem. The simulation shows that the network is effective in dealing with the optimization control problem for large-scale nonseparable steady-state systems.

  7. Neural network segmentation of magnetic resonance images

    Science.gov (United States)

    Frederick, Blaise

    1990-07-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network: by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. A neural network classifier for image segmentation was implemented on a Sun 4/60 and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier. The classifier's performance was evaluated as a function of network size, number of network layers and length of training. A single layer neural network performed quite well at
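
    A minimal sketch of this kind of per-pixel tissue classifier (synthetic 4-channel intensity vectors and labels stand in for the canine MR training set, and scikit-learn's MLPClassifier stands in for the backpropagation network):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n_per_class, n_channels, n_classes = 140, 4, 5   # ~700 labeled pixels, 4 imaging sequences

    # Synthetic stand-in: each tissue class has a characteristic mean intensity
    # vector across the four imaging sequences, plus noise.
    means = rng.uniform(0.2, 0.8, size=(n_classes, n_channels))
    X = np.vstack([m + 0.05 * rng.standard_normal((n_per_class, n_channels)) for m in means])
    y = np.repeat(np.arange(n_classes), n_per_class)

    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
    clf.fit(X, y)

    # "Segment" a new pixel: classify it from its 4 intensity values
    new_pixel = means[2] + 0.05 * rng.standard_normal(n_channels)
    print("predicted tissue class:", clf.predict(new_pixel.reshape(1, -1))[0])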

  8. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operation range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost function in the proposed learning method. Due to this fast convergence, training time is decreased by as much as 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered as an efficient way of reducing the time requirement of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
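
    To make the idea concrete, here is a small sketch comparing a squared-error cost with a logarithmic (cross-entropy-style) cost and their gradients for a single target/output pair; the exact cost used in the paper may differ, so this only illustrates why a logarithmic cost yields larger gradients, and hence faster convergence, far from the optimum:

    import numpy as np

    def squared_error(y, t):
        return 0.5 * (y - t) ** 2

    def log_cost(y, t, eps=1e-12):
        # Cross-entropy-style logarithmic cost for outputs in (0, 1)
        return -(t * np.log(y + eps) + (1 - t) * np.log(1 - y + eps))

    t = 1.0                                   # target
    for y in (0.1, 0.5, 0.9):                 # outputs far from / near the target
        grad_sq = y - t                       # d(squared error)/dy
        grad_log = -(t / y) + (1 - t) / (1 - y)   # d(log cost)/dy
        print(f"y={y:.1f}  squared={squared_error(y, t):.3f} (grad {grad_sq:+.2f})  "
              f"log={log_cost(y, t):.3f} (grad {grad_log:+.2f})")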

  9. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Chiong, Hong Sheng; Sime, Mary Jane; Wilson, Graham A

    2017-09-07

    Importance: There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. The objective was to determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Design: Retrospective audit. Samples: Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. A receiver operating characteristic curve was used to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Main Outcome Measures: Area under the receiver operating characteristic curve, sensitivity and specificity. Results: For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% CI, 0.807-0.995) with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% CI, 0.973-0.986) with 96.0% sensitivity and 90.0% specificity for Messidor. Conclusions and Relevance: This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. This article is protected by copyright. All rights reserved.

  10. Neural networks for segmentation, tracking, and identification

    Science.gov (United States)

    Rogers, Steven K.; Ruck, Dennis W.; Priddy, Kevin L.; Tarr, Gregory L.

    1992-09-01

    The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes the conventional artificial neural networks such as multilayer perceptrons and also biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing the data to learn features for subsequent recognition. A common first step for processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low-frequency Fourier coefficients. Much of human visual perception can be modelled by assuming the low-frequency Fourier coefficients as the feature space used by the human visual system. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction. For object recognition, the KLT may not be optimal.
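
    A small sketch of computing KLT (Karhunen-Loeve, equivalently PCA) coefficients for a set of data vectors and using them for reconstruction (the data are synthetic, and the number of retained components is an illustrative choice):

    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic data: 200 samples of 32-dimensional vectors with correlated components
    latent = rng.standard_normal((200, 4))
    mixing = rng.standard_normal((4, 32))
    X = latent @ mixing + 0.05 * rng.standard_normal((200, 32))

    # KLT basis = eigenvectors of the data covariance matrix
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
    basis = eigvecs[:, order[:4]]            # keep the 4 leading components

    coeffs = (X - mean) @ basis              # KLT coefficients: features for recognition
    X_rec = coeffs @ basis.T + mean          # optimal linear reconstruction from 4 coefficients

    err = np.mean((X - X_rec) ** 2) / np.mean((X - mean) ** 2)
    print(f"relative reconstruction error with 4 of 32 components: {err:.3f}")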

  11. Hopfield neural network based on ant system

    Institute of Scientific and Technical Information of China (English)

    洪炳镕; 金飞虎; 郭琦

    2004-01-01

    The Hopfield neural network is a single-layer recurrent neural network. A Hopfield network requires some control parameters to be carefully selected, else the network is apt to converge to a local minimum. An ant system is a nature-inspired metaheuristic algorithm. It has been applied to several combinatorial optimization problems such as the Traveling Salesman Problem, scheduling problems, etc. This paper shows that an ant system may be used in tuning the network control parameters by a group of cooperating ants. The major advantage of this approach is that the network parameters are adjusted automatically, avoiding a blind search for the set of control parameters. This network was tested on two TSP problems, with 5 cities and 10 cities. The results have shown an obvious improvement.

  12. An attractor-based complexity measurement for Boolean recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E P

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.

  13. An attractor-based complexity measurement for Boolean recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    Jérémie Cabessa

    Full Text Available We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.

  14. Neural-Network Object-Recognition Program

    Science.gov (United States)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  15. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...... (HNNs) with much fewer parameters than conventional HMMs and other hybrids can obtain comparable performance, and for the broad class task it is illustrated how the HNN can be applied as a purely transition based system, where acoustic context dependent transition probabilities are estimated by neural...

  16. Hierarchical organization of brain functional network during visual task

    CERN Document Server

    Zhuo, Zhao; Fu, Zhong-Qian; Zhang, Jie

    2011-01-01

    In this paper, the brain functional networks derived from high-resolution synchronous EEG time series during a visual task are generated by calculating the phase synchronization among the time series. The hierarchical modular organizations of these networks are systematically investigated by the fast Girvan-Newman algorithm. At the same time, the spatially adjacent electrodes (corresponding to EEG channels) are clustered into functional groups based on anatomical parcellation of the brain cortex, and this clustering information is compared to that of the functional network. The results show that the modular architectures of the brain functional network coincide with those from the anatomical structures over different levels of hierarchy, which suggests that populations of neurons performing the same function excite and inhibit in identical rhythms. The structure-function relationship further reveals that the correlations among EEG time series in the same functional group are much stronger than those in differe...

  17. Matrix representation of a Neural Network

    DEFF Research Database (Denmark)

    Christensen, Bjørn Klint

    This paper describes the implementation of a three-layer feedforward backpropagation neural network. The paper does not explain feedforward, backpropagation or what a neural network is. It is assumed, that the reader knows all this. If not please read chapters 2, 8 and 9 in Parallel Distributed...... Processing, by David Rummelhart (Rummelhart 1986) for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear...... network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes all indexed from 0. Bias-node for the hidden...
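
    A minimal sketch of the matrix formulation for a three-layer feedforward network with backpropagation, along the lines described above (trained here on XOR; the layer sizes, learning rate and epoch count are illustrative):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # I = 2 inputs
    T = np.array([[0], [1], [1], [0]], dtype=float)               # K = 1 output (XOR)

    I, J, K = 2, 8, 1                       # input, hidden, output node counts
    W1 = rng.standard_normal((I, J))        # input-to-hidden weight matrix
    b1 = np.zeros(J)
    W2 = rng.standard_normal((J, K))        # hidden-to-output weight matrix
    b2 = np.zeros(K)
    lr = 0.5

    for epoch in range(10000):
        # Feedforward: one matrix product per layer
        H = sigmoid(X @ W1 + b1)            # hidden activations, shape (4, J)
        Y = sigmoid(H @ W2 + b2)            # outputs, shape (4, K)

        # Backpropagation of the error (squared-error cost, sigmoid units)
        delta_out = (Y - T) * Y * (1 - Y)               # output-layer deltas
        delta_hid = (delta_out @ W2.T) * H * (1 - H)    # hidden-layer deltas

        W2 -= lr * H.T @ delta_out
        b2 -= lr * delta_out.sum(axis=0)
        W1 -= lr * X.T @ delta_hid
        b1 -= lr * delta_hid.sum(axis=0)

    print(np.round(Y.ravel(), 2))   # should approach [0, 1, 1, 0]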

  18. Application of Partially Connected Neural Network

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses mainly on the application of a Partially Connected Backpropagation Neural Network (PCBP) instead of the typical Fully Connected Neural Network (FCBP). The initial neural network is fully connected; after training with sample data using cross-entropy as the error function, a clustering method is employed to cluster the weights between the input and hidden layer and from the hidden to the output layer, and connections that are relatively unnecessary are deleted, thus the initial network becomes a PCBP network. Then PCBP can be used in prediction or data mining by training PCBP with data that comes from a database. At the end of this paper, several experiments are conducted to illustrate the effects of PCBP using the Iris data set.
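
    As a rough illustration of turning a fully connected layer into a partially connected one (the paper clusters weights to decide which connections to drop; the sketch below uses a simpler magnitude threshold as a stand-in, applied to a hypothetical trained weight matrix):

    import numpy as np

    rng = np.random.default_rng(3)
    W = rng.standard_normal((4, 8)) * rng.random((4, 8))   # hypothetical trained input-to-hidden weights

    # Keep only connections whose trained weight magnitude looks "necessary";
    # everything below the threshold is removed (set to zero and frozen).
    threshold = 0.3
    mask = np.abs(W) >= threshold           # boolean connectivity pattern of the partially connected network
    W_pruned = W * mask

    kept = mask.sum()
    print(f"kept {kept} of {mask.size} connections "
          f"({100 * kept / mask.size:.0f}% of the fully connected layer)")

    # During any further training, gradient updates would be multiplied by `mask`
    # so that deleted connections stay deleted.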

  19. Complex networks with scale-free nature and hierarchical modularity

    Science.gov (United States)

    Shekatkar, Snehal M.; Ambika, G.

    2015-09-01

    Generative mechanisms which lead to the empirically observed structure of networked systems from diverse fields like biology, technology and social sciences form a very important part of the study of complex networks. The structure of many networked systems like the biological cell, human society and the World Wide Web markedly deviates from that of completely random networks, indicating the presence of underlying processes. Often the main process involved in their evolution is the addition of links between existing nodes having a common neighbor. In this context we introduce an important property of the nodes, which we call mediating capacity, that is generic to many networks. This capacity decreases rapidly with increase in degree, making hubs weak mediators of the process. We show that this property of nodes provides an explanation for the simultaneous occurrence of the observed scale-free structure and hierarchical modularity in many networked systems. This also explains the high clustering and small path lengths seen in real networks as well as non-zero degree correlations. Our study also provides insight into the local process which ultimately leads to the emergence of preferential attachment and hence is also important in understanding the robustness and control of real networks as well as processes happening on real networks.

  20. Hierarchical network model for the analysis of human spatio-temporal information processing

    Science.gov (United States)

    Schill, Kerstin; Baier, Volker; Roehrbein, Florian; Brauer, Wilfried

    2001-06-01

    The perception of spatio-temporal patterns is a fundamental part of visual cognition. In order to understand more about the principles behind these biological processes, we are analyzing and modeling the representation of spatio-temporal structures on different levels of abstraction. For the low-level processing of motion information we have argued for the existence of a spatio-temporal memory in early vision. The basic properties of this structure are reflected in a neural network model which is currently being developed. Here we discuss major architectural features of this network, which is based on Kohonen's SOMs. In order to enable the representation, processing and prediction of spatio-temporal patterns on different levels of granularity and abstraction, the SOMs are organized in a hierarchical manner. The model has the advantage of a 'self-teaching' learning algorithm and stores temporal information through local feedback in each computational layer. The constraints for the neural modeling and the data sets for training the neural network are obtained by psychophysical experiments in which human subjects' abilities for dealing with spatio-temporal information are investigated.

  1. On neural networks that design neural associative memories.

    Science.gov (United States)

    Chan, H Y; Zak, S H

    1997-01-01

    The design problem of generalized brain-state-in-a-box (GBSB) type associative memories is formulated as a constrained optimization program, and "designer" neural networks for solving the program in real time are proposed. The stability of the designer networks is analyzed using Barbalat's lemma. The analyzed and synthesized neural associative memories do not require symmetric weight matrices. Two types of the GBSB-based associative memories are analyzed, one when the network trajectories are constrained to reside in the hypercube [-1, 1]^n and the other type when the network trajectories are confined to stay in the hypercube [0, 1]^n. Numerical examples and simulations are presented to illustrate the results obtained.

  2. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  3. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking neural networks, the latest generation of artificial neural networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.

  4. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    . In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer networks are often used. Among many neural network architectures, the three layers feed forward backpropagation neural...

  5. Pattern Classification using Simplified Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. This paper presents an approach for classifying patterns from simplified NNs. Although the predictive accuracy of ANNs is often higher than that of other methods or human experts, it is often said that ANNs are practically "black boxes", due to the complexity of the networks. In this paper, we have attempted to open up these black boxes by reducing the complexity of the network. The factor that makes this possible is the pruning algorithm. By eliminating redundant weights, redundant input and hidden units are identified and removed from the network. Using the pruning algorithm, we have been able to prune networks such that only a few input units, hidden units and connections are left, yielding a simplified network. Experimental results on several benchmark problems in neural networks show the effectiveness of the proposed approach with good generalization ability.

  6. Neural network for graphs: a contextual constructive approach.

    Science.gov (United States)

    Micheli, Alessio

    2009-03-01

    This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs including both acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had a recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived from the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graph's vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graph's topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.

  7. Multi-Layer and Recursive Neural Networks for Metagenomic Classification.

    Science.gov (United States)

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-09-01

    Recent advances in machine learning, specifically in deep learning with neural networks, have made a profound impact on fields such as natural language processing, image classification, and language modeling; however, the feasibility and potential benefits of these approaches for metagenomic data analysis have been largely under-explored. Deep learning exploits many layers of learning nonlinear feature representations, typically in an unsupervised fashion, and recent results have shown outstanding generalization performance on previously unseen data. Furthermore, some deep learning methods can also represent the structure in a data set. Consequently, deep learning and neural networks may prove to be an appropriate approach for metagenomic data. To determine whether such approaches are indeed appropriate for metagenomics, we experiment with two deep learning methods: i) a deep belief network, and ii) a recursive neural network, the latter of which provides a tree representing the structure of the data. We compare these approaches to the standard multi-layer perceptron, which has been well established in the machine learning community as a powerful prediction algorithm, though its presence is largely missing in the metagenomics literature. We find that traditional neural networks can be quite powerful classifiers on metagenomic data compared to baseline methods, such as random forests. On the other hand, while the deep learning approaches did not result in improvements to the classification accuracy, they do provide the ability to learn hierarchical representations of a data set that standard classification methods do not allow. Our goal in this effort is not to determine the best algorithm in terms of accuracy, as that depends on the specific application, but rather to highlight the benefits and drawbacks of each of the approaches we discuss and provide insight on how they can be improved for predictive metagenomic analysis.

  8. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using an artificial neural network are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, and the current day of the week is presented. The network developed is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the backpropagation-of-error algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  9. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  10. Learning drifting concepts with neural networks

    NARCIS (Netherlands)

    Biehl, Michael; Schwarze, Holm

    1993-01-01

    The learning of time-dependent concepts with a neural network is studied analytically and numerically. The linearly separable target rule is represented by an N-vector, whose time dependence is modelled by a random or deterministic drift process. A single-layer network is trained online using differ

  11. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered under a mild set of assumptions. A number of applications

  12. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  13. Neural networks as perpetual information generators

    Science.gov (United States)

    Englisch, Harald; Xiao, Yegao; Yao, Kailun

    1991-07-01

    The information gain in a neural network cannot be larger than the bit capacity of the synapses. It is shown that the equation derived by Engel et al. [Phys. Rev. A 42, 4998 (1990)] for the strongly diluted network with persistent stimuli contradicts this condition. Furthermore, for any time step the correct equation is derived by taking the correlation between random variables into account.

  14. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely a quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  15. Neural Network Approaches to Visual Motion Perception

    Institute of Scientific and Technical Information of China (English)

    郭爱克; 杨先一

    1994-01-01

    This paper concerns certain difficult problems in image processing and perception: neuro-computation of visual motion information. The first part of this paper deals with the spatial physiological integration by the figure-ground discrimination neural network in the visual system of the fly. We have outlined the fundamental organization and algorithms of this neural network, and mainly concentrated on the results of computer simulations of spatial physiological integration. It has been shown that the gain control mechanism, the nonlinearity of the synaptic transmission characteristic, the interaction between the two eyes, and the directional selectivity of the pool cells play decisive roles in the spatial physiological integration. In the second part, we have presented a self-organizing neural network for the perception of visual motion by using a retinotopic array of Reichardt's motion detectors and Kohonen's self-organizing maps. It has been demonstrated by computer simulations that the network is abl

  16. Improving neural network performance on SIMD architectures

    Science.gov (United States)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions is a way to speed up neural network processing that is available on a number of modern CPUs. In our experiments, we use ARM NEON as an example SIMD architecture. The first method deals with the half-float data type for matrix computations. The second method describes a fixed-point data type for the same purpose. The third method considers a vectorized implementation of activation functions. For each method we set up a series of experiments for convolutional and fully connected networks designed for the image recognition task.
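
    A small sketch of the fixed-point idea behind the second method (quantizing weights and activations to 16-bit integers with a constant scale factor; the scale and bit width here are illustrative, not the ones used in the paper):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.standard_normal(256).astype(np.float32)          # input activations
    W = rng.standard_normal((128, 256)).astype(np.float32)   # layer weights

    SCALE = 1 << 10   # Q-format scale factor: ~10 fractional bits

    def to_fixed(a):
        return np.clip(np.round(a * SCALE), -32768, 32767).astype(np.int16)

    xq, Wq = to_fixed(x), to_fixed(W)

    # Integer matrix-vector product (accumulate in a wider type to avoid overflow),
    # then rescale back: (a*S)*(b*S) = a*b*S^2, so divide by S^2.
    y_fixed = (Wq.astype(np.int64) @ xq.astype(np.int64)) / (SCALE * SCALE)
    y_float = W @ x

    print("max abs error vs float32:", float(np.max(np.abs(y_fixed - y_float))))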

  17. Stability analysis of discrete-time BAM neural networks based on standard neural network models

    Institute of Scientific and Technical Information of China (English)

    ZHANG Sen-lin; LIU Mei-qin

    2005-01-01

    To facilitate stability analysis of discrete-time bidirectional associative memory (BAM) neural networks, they were converted into novel neural network models, termed standard neural network models (SNNMs), which interconnect linear dynamic systems and bounded static nonlinear operators. By combining a number of different Lyapunov functionals with S-procedure, some useful criteria of global asymptotic stability and global exponential stability of the equilibrium points of SNNMs were derived. These stability conditions were formulated as linear matrix inequalities (LMIs). So global stability of the discrete-time BAM neural networks could be analyzed by using the stability results of the SNNMs. Compared to the existing stability analysis methods, the proposed approach is easy to implement, less conservative, and is applicable to other recurrent neural networks.

  18. Neural-networks-based Modelling and a Fuzzy Neural Networks Controller of MCFC

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Molten Carbonate Fuel Cells (MCFC) provide a highly efficient and clean power generation technology which will soon be widely utilized. The temperature characteristics of an MCFC stack are briefly analyzed. A radial basis function (RBF) neural network identification technique is applied to set up the nonlinear temperature model of the MCFC stack, and the identification structure, algorithm and modeling training process are given in detail. A fuzzy controller of the MCFC stack is designed. In order to improve its online control ability, a neural network trained on the I/O data of the fuzzy controller is designed. The neural network can memorize and expand the inference rules of the fuzzy controller and substitute for the fuzzy controller to control the MCFC stack online. A detailed design of the controller is given. The validity of MCFC stack modelling based on neural networks and the superior performance of the fuzzy neural network controller are demonstrated by simulations.

  19. Doubly Optimal Secure Multicasting: Hierarchical Hybrid Communication Network : Disaster Relief

    CERN Document Server

    Garimella, Rama Murthy; Singhal, Deepti

    2011-01-01

    Recently, the world has witnessed the increasing occurrence of disasters, some of natural origin and others caused by man. The intensity of the phenomena that cause such disasters, the frequency with which they occur, the number of people affected and the material damage caused by them have been growing substantially. Disasters are defined as natural, technological, and human-initiated events that disrupt the normal functioning of the economy and society on a large scale. Areas where disasters have occurred bring many dangers to rescue teams, and the communication network infrastructure is usually destroyed. To manage these hazards, different wireless technologies can be launched in the area of the disaster. This paper discusses innovative wireless technologies for disaster management. Specifically, issues related to the design of a Hierarchical Hybrid Communication Network (arising in the communication network for disaster relief) are discussed.

  20. Retrieval capabilities of hierarchical networks: from Dyson to Hopfield.

    Science.gov (United States)

    Agliari, Elena; Barra, Adriano; Galluzzi, Andrea; Guerra, Francesco; Tantari, Daniele; Tavani, Flavia

    2015-01-16

    We consider statistical-mechanics models for spin systems built on hierarchical structures, which provide a simple example of non-mean-field framework. We show that the coupling decay with spin distance can give rise to peculiar features and phase diagrams much richer than their mean-field counterpart. In particular, we consider the Dyson model, mimicking ferromagnetism in lattices, and we prove the existence of a number of metastabilities, beyond the ordered state, which become stable in the thermodynamic limit. Such a feature is retained when the hierarchical structure is coupled with the Hebb rule for learning, hence mimicking the modular architecture of neurons, and gives rise to an associative network able to perform single pattern retrieval as well as multiple-pattern retrieval, depending crucially on the external stimuli and on the rate of interaction decay with distance; however, those emergent multitasking features reduce the network capacity with respect to the mean-field counterpart. The analysis is accomplished through statistical mechanics, Markov chain theory, signal-to-noise ratio technique, and numerical simulations in full consistency. Our results shed light on the biological complexity shown by real networks, and suggest future directions for understanding more realistic models.

  1. Category theoretic analysis of hierarchical protein materials and social networks.

    Directory of Open Access Journals (Sweden)

    David I Spivak

    Full Text Available Materials in biology span all the scales from Angstroms to meters and typically consist of complex hierarchical assemblies of simple building blocks. Here we describe an application of category theory to describe structural and resulting functional properties of biological protein materials by developing so-called ologs. An olog is like a "concept web" or "semantic network" except that it follows a rigorous mathematical formulation based on category theory. This key difference ensures that an olog is unambiguous, highly adaptable to evolution and change, and suitable for sharing concepts with other ologs. We consider simple cases of beta-helical and amyloid-like protein filaments subjected to axial extension and develop an olog representation of their structural and resulting mechanical properties. We also construct a representation of a social network in which people send text messages to their nearest neighbors and act as a team to perform a task. We show that the olog for the protein and the olog for the social network feature identical category-theoretic representations, and we proceed to precisely explicate the analogy or isomorphism between them. The examples presented here demonstrate that the intrinsic nature of a complex system, which in particular includes a precise relationship between structure and function at different hierarchical levels, can be effectively represented by an olog. This, in turn, allows for comparative studies between disparate materials or fields of application, and results in novel approaches to derive functionality in the design of de novo hierarchical systems. We discuss opportunities and challenges associated with the description of complex biological materials by using ologs as a powerful tool for analysis and design in the context of materiomics, and we present the potential impact of this approach for engineering, life sciences, and medicine.

  2. Dynamic pricing by hopfield neural network

    Institute of Scientific and Technical Information of China (English)

    Lusajo M Minga; FENG Yu-qiang(冯玉强); LI Yi-jun(李一军); LU Yang(路杨); Kimutai Kimeli

    2004-01-01

    The increase in the number of shopbot users in e-commerce has triggered flexibility of sellers in their pricing strategies. Sellers see the importance of automated price setting which provides efficient services to a large number of buyers who are using shopbots. This paper studies the characteristic of decreasing energy with time in a continuous model of a Hopfield neural network, that is, the decreasing of errors in the network with respect to time. This characteristic shows that it is possible to use a Hopfield neural network to obtain the main factor of dynamic pricing, the least variable cost, from production function principles. The least variable cost is obtained by reducing or increasing the input combination factors, and then comparing the network output with the desired output, where the difference between the network output and the desired output decreases in the same manner as the Hopfield neural network energy. The Hopfield neural network will simplify the rapid change of prices in e-commerce during transactions that depend on the demand quantity for a demand-sensitive model of pricing.
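
    A minimal sketch of the energy-decrease property that the pricing scheme relies on, using a small discrete Hopfield network with Hebbian weights and asynchronous updates (the pattern and network size are arbitrary; the paper's continuous model differs in detail, but the monotone decrease of the energy is the same idea):

    import numpy as np

    rng = np.random.default_rng(5)
    pattern = rng.choice([-1, 1], size=16)

    # Hebbian weight matrix storing one pattern, no self-connections
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0.0)

    def energy(state):
        # Standard Hopfield energy: E = -1/2 * s^T W s
        return -0.5 * state @ W @ state

    # Start from a corrupted version of the stored pattern
    state = pattern.copy()
    flip = rng.choice(len(state), size=5, replace=False)
    state[flip] *= -1

    print("initial energy:", energy(state))
    for sweep in range(3):
        for i in rng.permutation(len(state)):      # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
        print(f"after sweep {sweep + 1}: energy = {energy(state)}, "
              f"matches stored pattern: {np.array_equal(state, pattern)}")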

  3. Neutron spectrometry with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail: rvega@cantera.reduaz.mx

    2005-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
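
    A rough sketch of the unfolding setup (a regression network mapping sphere count rates to binned spectra; the data here are synthetic stand-ins generated through a random response matrix, not the UTA4 matrix or the 129 real spectra):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)
    n_spheres, n_groups, n_spectra = 7, 31, 129

    # Synthetic stand-ins: random non-negative spectra and a random response matrix
    spectra = rng.random((n_spectra, n_groups))
    response = rng.random((n_spheres, n_groups))        # stand-in for the UTA4 response matrix
    count_rates = spectra @ response.T                  # expected Bonner-sphere count rates

    # Train the unfolding network: count rates in, binned spectrum out
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
    net.fit(count_rates[:100], spectra[:100])

    unfolded = net.predict(count_rates[100:])
    mae = np.mean(np.abs(unfolded - spectra[100:]))
    print(f"mean absolute error on unseen spectra: {mae:.3f}")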

  4. Neural network technologies for image classification

    Science.gov (United States)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

    We analyze the classes of problems with an objective necessity to use neural network technologies, i.e., representation and resolution problems in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine and other fields. We reviewed different approaches to texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the MODIS spectroradiometer. The cloud texture is described by the statistical characteristics of the GLCM (Gray Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied to image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. We also experimentally chose the optimal architecture and parameters for the PNN model used for image classification.
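
    A probabilistic neural network is essentially a Parzen-window classifier with one pattern unit per training vector. The sketch below implements that idea in NumPy on made-up texture feature vectors; it is not the authors' MODIS pipeline, and the feature values and kernel width are arbitrary.

```python
import numpy as np

# Minimal sketch of a probabilistic neural network (PNN): a Parzen-window
# classifier with a Gaussian kernel, here applied to made-up GLCM-style
# texture feature vectors instead of the MODIS cloud data.

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared distances between every test vector and every pattern unit of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2 * sigma**2)).mean(axis=1)  # summation layer
    return classes[scores.argmax(axis=1)]                         # decision layer

rng = np.random.default_rng(2)
X_train = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(1, 0.3, (20, 4))])
y_train = np.array([0] * 20 + [1] * 20)        # two cloud "types"
X_test = rng.normal(1, 0.3, (5, 4))
print(pnn_predict(X_train, y_train, X_test))   # mostly class 1
```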

  5. Representations in neural network based empirical potentials

    Science.gov (United States)

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios

    2017-07-01

    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.
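
    The kind of inspection described above can be sketched as follows: train a small network on toy data, read out its hidden-layer activations, and project them with PCA. The descriptors and target below are invented stand-ins for the silicon environments studied in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.decomposition import PCA

# Minimal sketch (toy data, not the silicon potentials of the paper): train a
# small network, then inspect how the hidden layer represents the inputs by
# projecting its activations onto their first two principal components.

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (500, 5))          # stand-in for atomic environment descriptors
y = np.sin(X).sum(axis=1)                 # stand-in for a local energy

net = MLPRegressor(hidden_layer_sizes=(32,), activation="tanh",
                   max_iter=3000, random_state=3).fit(X, y)

hidden = np.tanh(X @ net.coefs_[0] + net.intercepts_[0])   # first-layer activations
embedding = PCA(n_components=2).fit_transform(hidden)
print(embedding.shape)   # (500, 2): a low-dimensional view of the learned representation
```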

  6. Using neural networks to describe tracer correlations

    Directory of Open Access Journals (Sweden)

    D. J. Lary

    2004-01-01

    Full Text Available Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.. In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models. Such as the dataset from the Halogen Occultation Experiment (HALOE which has continuously observed CH4  (but not N2O from 1991 till the present. The neural network Fortran code used is available for download.
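
    A hedged sketch of such a fit is given below using scikit-learn's MLPRegressor (the study used Quickprop) with the same four inputs and an eight-node hidden layer; the tracer values are synthetic, so the resulting correlation is only illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal sketch (synthetic data): learn N2O as a function of latitude, pressure,
# day of year and CH4 volume mixing ratio, i.e. a neural-network description of
# the tracer-tracer correlation.

rng = np.random.default_rng(4)
n = 2000
lat = rng.uniform(-90, 90, n)
pressure = rng.uniform(1, 300, n)                        # hPa
doy = rng.uniform(0, 365, n)
ch4 = rng.uniform(0.2, 1.8, n)                           # ppmv
n2o = 320 * (ch4 / 1.8) ** 1.5 + rng.normal(0, 2, n)     # made-up compact correlation

X = np.column_stack([lat, pressure, doy, ch4])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=4))
model.fit(X, n2o)
r = np.corrcoef(model.predict(X), n2o)[0, 1]
print(f"correlation between simulated and training values: {r:.4f}")
```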

  7. Estimates on compressed neural networks regression.

    Science.gov (United States)

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the neural element number n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of the feedforward neural networks (FNNs), we prove that solving the FNNs regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.

  8. Community structure of complex networks based on continuous neural network

    Science.gov (United States)

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou

    2017-09-01

    As a new subject, research on complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so it is a very important task to analyze the community structure of complex networks accurately. In this paper, we study the problem of extracting the community structure of complex networks and propose a continuous neural network (CNN) algorithm. It is proved that for any given initial value, the continuous neural network algorithm converges to the eigenvector of the maximum eigenvalue of the network modularity matrix. Therefore, the division of the network into two communities can be obtained from the signs of the components of the stable state reached by the network's evolution.
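
    In linear-algebra terms, the statement above reduces to finding the leading eigenvector of the modularity matrix and splitting nodes by its sign. A small sketch using a shifted power iteration (a stand-in for the continuous network dynamics, not the authors' algorithm) is shown below on a toy graph of two cliques.

```python
import numpy as np

# Minimal sketch (not the authors' continuous neural network): split a graph into
# two communities from the sign pattern of the leading eigenvector of the
# modularity matrix, computed here with a shifted power iteration.

# Two 4-node cliques joined by a single edge (adjacency matrix).
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1

k = A.sum(axis=1)
m = A.sum() / 2
B = A - np.outer(k, k) / (2 * m)          # modularity matrix

shift = np.abs(B).sum(axis=1).max()       # makes the largest eigenvalue dominant
v = np.random.default_rng(5).standard_normal(len(A))
for _ in range(200):                      # power iteration
    v = (B + shift * np.eye(len(A))) @ v
    v /= np.linalg.norm(v)

print(np.sign(v))   # the sign pattern separates the two cliques
```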

  9. Identification and Position Control of Marine Helm using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hui ZHU

    2008-02-01

    Full Text Available If nonlinearities such as saturation of the amplifier gain and motor torque, gear backlash, and shaft compliances - just to name a few - are considered in the position control system of a marine helm, traditional control methods are no longer sufficient to improve the performance of the system. In this paper an alternative to traditional control methods - a neural network reference controller - is proposed to establish adaptive control of the position of the marine helm so that the controlled variable reaches the commanded position. This controller comprises two neural networks: one is the plant-model network, used to identify the nonlinear system; the other is the controller network, used to make the output follow the reference model. The experimental results demonstrate that this adaptive neural network reference controller achieves much better control performance than traditional controllers.

  10. Digital systems for artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Atlas, L.E. (Interactive Systems Design Lab., Univ. of Washington, WA (US)); Suzuki, Y. (NTT Human Interface Labs. (US))

    1989-11-01

    A tremendous flurry of research activity has developed around artificial neural systems. These systems have also been tested in many applications, often with positive results. Most of this work has taken place as digital simulations on general-purpose serial or parallel digital computers. Specialized neural network emulation systems have also been developed for more efficient learning and use. The authors discussed how dedicated digital VLSI integrated circuits offer the highest near-term future potential for this technology.

  11. Genomic analysis of the hierarchical structure of regulatory networks

    Science.gov (United States)

    Yu, Haiyuan; Gerstein, Mark

    2006-01-01

    A fundamental question in biology is how the cell uses transcription factors (TFs) to coordinate the expression of thousands of genes in response to various stimuli. The relationships between TFs and their target genes can be modeled in terms of directed regulatory networks. These relationships, in turn, can be readily compared with commonplace “chain-of-command” structures in social networks, which have characteristic hierarchical layouts. Here, we develop algorithms for identifying generalized hierarchies (allowing for various loop structures) and use these approaches to illuminate extensive pyramid-shaped hierarchical structures existing in the regulatory networks of representative prokaryotes (Escherichia coli) and eukaryotes (Saccharomyces cerevisiae), with most TFs at the bottom levels and only a few master TFs on top. These masters are situated near the center of the protein–protein interaction network, a different type of network from the regulatory one, and they receive most of the input for the whole regulatory hierarchy through protein interactions. Moreover, they have maximal influence over other genes, in terms of affecting expression-level changes. Surprisingly, however, TFs at the bottom of the regulatory hierarchy are more essential to the viability of the cell. Finally, one might think master TFs achieve their wide influence through directly regulating many targets, but TFs with most direct targets are in the middle of the hierarchy. We find, in fact, that these midlevel TFs are “control bottlenecks” in the hierarchy, and this great degree of control for “middle managers” has parallels in efficient social structures in various corporate and governmental settings. PMID:17003135

  12. A HIERARCHICAL INTRUSION DETECTION ARCHITECTURE FOR WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    Hossein Jadidoleslamy

    2011-10-01

    Full Text Available Protecting networks against different types of attacks is one of the most important issues in the network and information security application domains. This problem is even more important for Wireless Sensor Networks (WSNs), given their special properties. Several architectures and guidelines have been proposed to protect Wireless Sensor Networks (WSNs) against different types of intrusions, but none of them takes a comprehensive view of this problem and they are usually designed and implemented for a single purpose. The design proposed in this paper aims at a comprehensive view of this issue by presenting a complete and comprehensive Intrusion Detection Architecture (IDA). The main contribution of this architecture is its hierarchical structure; i.e., it is designed and applicable, in one or two levels, consistent with the application domain and its required security level. The focus of this paper is on clustered WSNs, designing and deploying a Cluster-based Intrusion Detection System (CIDS) on cluster heads and a Wireless Sensor Network wide-level Intrusion Detection System (WSNIDS) on the central server. The assumptions about the WSN and the Intrusion Detection Architecture (IDA) are: a static and heterogeneous network, a hierarchical and clustered structure, overlapping clusters, and the use of a hierarchical routing protocol such as LEACH, with minor changes. Finally, the proposed idea has been verified by designing a questionnaire, presenting it to about 50 experts, and then analyzing and evaluating the acquired results.

  13. Equivalence of Conventional and Modified Network of Generalized Neural Elements

    Directory of Open Access Journals (Sweden)

    E. V. Konovalov

    2016-01-01

    Full Text Available The article is devoted to the analysis of neural networks consisting of generalized neural elements. The first part of the article proposes a new neural network model — a modified network of generalized neural elements (MGNE-network). This network develops the model of the generalized neural element, whose formal description contains some flaws; in the model of the MGNE-network these drawbacks are overcome. The neural network is introduced all at once, without a preliminary description of the model of a single neural element and of the method by which such elements interact. The description of the neural network mathematical model is thereby simplified, which makes it relatively easy to construct a simulation model on its basis for numerical experiments. The model of the MGNE-network is universal, uniting the properties of networks consisting of neuron-oscillators and neuron-detectors. In the second part of the article we prove the equivalence of the dynamics of the two considered neural networks: the network consisting of classical generalized neural elements, and the MGNE-network. We introduce the definition of equivalence in the functioning of the generalized neural element and of the MGNE-network consisting of a single element. Then we introduce the definition of the equivalence of the dynamics of the two neural networks in general. The correspondence between the parameters of the two considered neural network models is determined, and we discuss the issue of matching the initial conditions of the two models. We prove a theorem on the equivalence of the dynamics of the two considered neural networks. This theorem allows us to apply all results previously obtained for networks consisting of classical generalized neural elements to the MGNE-network.

  14. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  15. Network Traffic Prediction based on Particle Swarm BP Neural Network

    Directory of Open Access Journals (Sweden)

    Yan Zhu

    2013-11-01

    Full Text Available The traditional BP neural network algorithm has shortcomings such as easily falling into local minima and slow convergence. Particle swarm optimization is an evolutionary computation technique based on swarm intelligence, but it cannot guarantee global convergence. The Artificial Bee Colony algorithm is a global optimization algorithm with many advantages: it is simple, convenient and strongly robust. In this paper, a new BP neural network based on the Artificial Bee Colony algorithm and the particle swarm optimization algorithm is proposed to optimize the weights and threshold values of the BP neural network. Network traffic prediction experiments show that the optimized BP network traffic prediction based on PSO-ABC has high prediction accuracy and stable prediction performance.
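
    As a sketch of the general idea of evolutionary weight optimization, the code below uses plain particle swarm optimization to fit the weights of a tiny feedforward predictor on a synthetic traffic series; the ABC step of the paper's PSO-ABC hybrid is omitted, and all network sizes and PSO constants are arbitrary.

```python
import numpy as np

# Minimal sketch: plain particle swarm optimization of the weights of a tiny
# one-hidden-layer network on a toy "traffic" series (the paper combines PSO
# with an Artificial Bee Colony step, which is omitted here).

rng = np.random.default_rng(6)
t = np.arange(200)
traffic = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(200)
X = np.array([traffic[i:i + 3] for i in range(len(traffic) - 3)])   # 3 lagged values
y = traffic[3:]

def predict(w, X):
    W1, b1, W2, b2 = w[:15].reshape(3, 5), w[15:20], w[20:25], w[25]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

n_particles, dim = 30, 26
pos = rng.standard_normal((n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best training MSE found by PSO: {pbest_val.min():.4f}")
```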

  16. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
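
    A minimal sketch of the central trick, assuming PyTorch and not taken from the authors' code: the forward pass emits hard spikes, while the backward pass substitutes a smooth surrogate function of the membrane potential so that gradients can propagate.

```python
import torch

# Minimal sketch (not the authors' implementation): a spiking threshold whose
# backward pass treats the membrane potential as a differentiable signal, using
# a smooth surrogate derivative in place of the non-differentiable spike.

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 1.0).float()             # emit a spike above threshold

    @staticmethod
    def backward(ctx, grad_output):
        membrane, = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * (membrane - 1.0).abs()) ** 2
        return grad_output * surrogate               # smooth stand-in for d(spike)/d(membrane)

spike = SurrogateSpike.apply

# One integrate-and-fire step inside an otherwise ordinary training computation.
w = torch.randn(100, 10, requires_grad=True)
inputs = torch.rand(32, 100)                         # batch of input spike counts
membrane = inputs @ w                                # integrate synaptic input
out = spike(membrane)                                # non-differentiable forward, surrogate backward
loss = ((out.mean(dim=0) - 0.1) ** 2).sum()          # e.g. push firing rates toward a target
loss.backward()
print(w.grad.abs().mean())                           # gradients flow despite the spikes
```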

  17. Foreign currency rate forecasting using neural networks

    Science.gov (United States)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates among the USD, GBP and DEM. We approach the problem from a time-series analysis framework - where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the results of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
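
    A hedged sketch of the rate-difference input scheme is shown below with synthetic data and scikit-learn: the network is fed the previous few daily changes and asked to predict the next one, and is then scored on the direction of the move. The rates, lags and network size are all arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal sketch (synthetic rates): forecast the next daily change of an
# exchange rate from the previous few changes, i.e. the "rate differences"
# input scheme discussed in the paper, rather than the raw rates themselves.

rng = np.random.default_rng(7)
rate = 1.5 + np.cumsum(rng.normal(0, 0.002, 1000))     # stand-in for USD-GBP daily rates
diff = np.diff(rate)

lags = 5
X = np.array([diff[i:i + lags] for i in range(len(diff) - lags)])
y = diff[lags:]
split = 800

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=7))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
hit_rate = np.mean(np.sign(pred) == np.sign(y[split:]))   # direction-of-move accuracy
print(f"fraction of correctly predicted directions: {hit_rate:.2f}")
```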

  18. Training Deep Spiking Neural Networks using Backpropagation

    Directory of Open Access Journals (Sweden)

    Jun Haeng Lee

    2016-11-01

    Full Text Available Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  19. Kannada character recognition system using neural network

    Science.gov (United States)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and the conversion of handwritten documents into structured text form. There is not yet a sufficient body of work on Indian language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feedforward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.

  1. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    M Sinha; P K Kalra; K Kumar

    2000-04-01

    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher neuron model (multiplicative aggregation function). It can adapt to standard neuron and higher order neuron, as well as a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than Kalman Filter (KF) and Feedforward Multilayer Neural Network (FMNN) (also simply referred to as Artificial Neural Network, ANN) with lambda-gamma learning. The typical simulation runs also bring out the superiority of the proposed scheme over Kalman filter from the standpoint of computation time and the amount of data needed for the desired degree of estimated accuracy for the specific problem of orbit determination.

  2. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The Artificial neural network model of this research uses slope characteristics as input and leads to the output in form of the probability of failure...... and factor of safety. It can be stated that the trained neural networks are capable of predicting the stability of slopes and safety factor of landslide hazard in study area with an acceptable level of confidence. Landslide hazard analysis and mapping can provide useful information for catastrophic loss...... failure" which is main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceed the effective shear strength. These slides have been referred to as landslide. An expert system based on artificial...

  3. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
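
    A classical special case of such dynamics, shown here only as an illustration and not as the paper's exact equations, is the gradient-flow network dX/dt = -gamma * A^T (A X - I), whose zero-initial-state solution converges to the Moore-Penrose inverse of a full-column-rank matrix A.

```python
import numpy as np

# Illustrative gradient-flow recurrent network (not the paper's exact dynamics):
# the matrix ODE  dX/dt = -gamma * A^T (A X - I), started from the zero state,
# converges to the Moore-Penrose inverse of a full-column-rank matrix A.

rng = np.random.default_rng(8)
A = rng.standard_normal((6, 4))        # full column rank with probability 1
X = np.zeros((4, 6))                   # zero initial state
gamma, dt = 1.0, 1e-3
I = np.eye(6)

for _ in range(50000):                 # explicit Euler integration of the network
    X -= dt * gamma * (A.T @ (A @ X - I))

print(np.linalg.norm(X - np.linalg.pinv(A)))   # small: X has converged toward pinv(A)
```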

  4. Classification of radar clutter using neural networks.

    Science.gov (United States)

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented.

  5. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research used evaluation data from 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated as a weighted average of the scores (1 to 5) determined for each HVI characteristic evaluated, according to industry standards. The artificial neural networks showed a high capacity for correctly classifying the 20 selected genotypes based on the fiber quality index; when fiber length was used together with the short fiber index, fiber maturity, and micronaire index, the artificial neural networks gave better results than when using only fiber length or the previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better genotype classification results. The results of the present study show that artificial neural networks have great potential for use at the different stages of a cotton breeding program aimed at improving the fiber quality of future cultivars.

  6. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics. 2. Feed-forward networks, Perceptrons, Function approximators. 3. Self-organisation, Feature Maps. 4. Feed-back networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection.

  7. [A medical image semantic modeling based on hierarchical Bayesian networks].

    Science.gov (United States)

    Lin, Chunyi; Ma, Lihong; Yin, Junxun; Chen, Jianyu

    2009-04-01

    A semantic modeling approach for medical image semantic retrieval based on hierarchical Bayesian networks is proposed, tailored to the characteristics of medical images. It uses GMMs (Gaussian mixture models) to map low-level image features into object semantics with probabilities, and then captures high-level semantics by fusing these object semantics with a Bayesian network, thereby building a multi-layer medical image semantic model that enables automatic image annotation and semantic retrieval using keywords at different semantic levels. To validate the method, we built a multi-level semantic model from a small set of astrocytoma MRI (magnetic resonance imaging) samples in order to extract the semantics of astrocytoma malignancy grade. Experimental results show that this is a superior approach.

  8. Spatially Resolved Monitoring of Drying of Hierarchical Porous Organic Networks.

    Science.gov (United States)

    Velasco, Manuel Isaac; Silletta, Emilia V; Gomez, Cesar G; Strumia, Miriam C; Stapf, Siegfried; Monti, Gustavo Alberto; Mattea, Carlos; Acosta, Rodolfo H

    2016-03-01

    Evaporation kinetics of water confined in hierarchical polymeric porous media is studied by low field nuclear magnetic resonance (NMR). Systems synthesized with various degrees of cross-linker density render networks with similar pore sizes but different responses when soaked with water. Polymeric networks with a low percentage of cross-linker can undergo swelling, which affects the porosity as well as the drying kinetics. The drying process is monitored macroscopically by single-sided NMR, with a spatial resolution of 100 μm, while microscopic information is obtained by measurements of spin-spin relaxation times (T2). The transition from a funicular to a pendular regime, where hydraulic connectivity is lost and the capillary flow cannot compensate for the surface evaporation, can be observed from inspection of the water content in different sample layers. Relaxation measurements indicate that even when the larger pore structures are depleted of water, capillary flow occurs through smaller voids.

  9. Architecture of the parallel hierarchical network for fast image recognition

    Science.gov (United States)

    Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule

    2016-09-01

    Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures of temporal image decomposition and hierarchy formation are described by mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure on different hierarchical levels of the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is the following: in the results synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.

  10. Implementation aspects of Graph Neural Networks

    Science.gov (United States)

    Barcz, A.; Szymański, Z.; Jankowski, S.

    2013-10-01

    This article summarises the results of implementation of a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model, capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function being a contraction map, which is assured by imposing a penalty on model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions that were made during the GNN implementation process.

  11. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically it is optimized for low-latency, high bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  12. Spectral classification using convolutional neural networks

    CERN Document Server

    Hála, Pavel

    2014-01-01

    There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star or galaxy) from one-dimensional spectra only. The author developed several scripts and C programs for dataset preparation, preprocessing and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application to a dataset of more than 60000 spectra yielded a success rate of nearly 95%. This thesis conclusively demonstrates the great potential of convolutional neural networks and deep learning methods in astrophysics.

  13. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  14. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR) image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.

  15. Contractor Prequalification Based on Neural Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jin-long; YANG Lan-rong

    2002-01-01

    Contractor Prequalification involves the screening of contractors by a project owner, according to a given set of criteria, in order to determine their competence to perform the work if awarded the construction contract. This paper introduces the capabilities of neural networks in solving problems related to contractor prequalification. The neural network system for contractor prequalification has an input vector of 8 components and an output vector of 1 component. The output vector represents whether a contractor is qualified or not qualified to submit a bid on a project.

  16. Simulation of photosynthetic production using neural network

    Science.gov (United States)

    Kmet, Tibor; Kmetova, Maria

    2013-10-01

    This paper deals with neural network based optimal control synthesis for solving optimal control problems with control and state constraints and discrete time delay. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. This approach is applicable to a wide class of nonlinear systems. The proposed simulation method is illustrated by the optimal control problem of photosynthetic production described by discrete time delay differential equations. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  17. Top tagging with deep neural networks [Vidyo

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  18. Intelligent neural network classifier for automatic testing

    Science.gov (United States)

    Bai, Baoxing; Yu, Heping

    1996-10-01

    This paper is concerned with an application of a multilayer feedforward neural network to the visual inspection of industrial pictures, and introduces a high-performance image processing and recognition system that can be used for real-time detection of blemishes, streaks, cracks, etc. on the inner walls of high-accuracy pipes. To take full advantage of the capabilities of the artificial neural network, such as distributed information memory, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the images to be inspected and to classify them.

  19. Speech Recognition Method Based on Multilayer Chaotic Neural Network

    Institute of Scientific and Technical Information of China (English)

    REN Xiaolin; HU Guangrui

    2001-01-01

    In this paper, speech recognition using neural networks is investigated. In particular, chaotic dynamics is introduced into the neurons, and a multilayer chaotic neural network (MLCNN) architecture is built. A learning algorithm is also derived to train the weights of the network. We apply the MLCNN to speech recognition and compare the performance of the network with those of a recurrent neural network (RNN) and a time-delay neural network (TDNN). Experimental results show that the MLCNN method outperforms the other neural network methods with respect to average recognition rate.

  20. Reliable Point to Multipoint Hierarchical Routing in Scatternet Sensor Network

    Directory of Open Access Journals (Sweden)

    R.Dhaya

    2011-01-01

    Full Text Available In recent communication developments, Bluetooth scatternet is a wireless technology developed for wideband local access. Bluetooth technology is very popular because of its low cost and easy deployment, and is based on IEEE 802.11 standards. On the other hand, a Wireless Sensor Network (WSN) consists of a large number of sensor nodes distributed to monitor an environment, and each node in a WSN consists of a small CPU, a sensing device and a battery. Sensor networks are often deployed in inconvenient locations where frequent recharging is difficult, so routing in a WSN is an important issue for conserving energy as well as for increasing the lifetime of the network, since a routing protocol finds the path between sources and the sink. Moreover, it is a challenging task to schedule the data between nodes in a scatternet in a congested environment. This paper presents a new scheduling method for point-to-multipoint routing in a scatternet sensor network; the new dynamic routing method designed is cluster-based with hierarchical routing. The efficiency of this method is compared in terms of energy consumption, and the results show that the proposed routing is energy efficient and simultaneously increases the lifetime of the network.

  1. GSMNet: A Hierarchical Graph Model for Moving Objects in Networks

    Directory of Open Access Journals (Sweden)

    Hengcai Zhang

    2017-03-01

    Full Text Available Existing data models for moving objects in networks are often limited by flexibly controlling the granularity of representing networks and the cost of location updates and do not encompass semantic information, such as traffic states, traffic restrictions and social relationships. In this paper, we aim to fill the gap of traditional network-constrained models and propose a hierarchical graph model called the Geo-Social-Moving model for moving objects in Networks (GSMNet) that adopts four graph structures, RouteGraph, SegmentGraph, ObjectGraph and MoveGraph, to represent the underlying networks, trajectories and semantic information in an integrated manner. The bulk of user-defined data types and corresponding operators is proposed to handle moving objects and answer a new class of queries supporting three kinds of conditions: spatial, temporal and semantic information. Then, we develop a prototype system with the native graph database system Neo4J to implement the proposed GSMNet model. In the experiment, we conduct the performance evaluation using simulated trajectories generated from the BerlinMOD (Berlin Moving Objects Database) benchmark and compare with the mature MOD system Secondo. The results of 17 benchmark queries demonstrate that our proposed GSMNet model has strong potential to reduce time-consuming table join operations and shows remarkable advantages with regard to representing semantic information and controlling the cost of location updates.

  2. Multiprocessor Realization of Neural Networks

    Science.gov (United States)

    1990-04-01

    the unique capabilities of receiving, processing, and transmitting electro-chemical signals. These signals are sent over neural pathways that make up...these switching nodes and a clever arrangement of internode links to guarantee at least one path between each processor and memory. These types of

  3. Optically excited synapse for neural networks.

    Science.gov (United States)

    Boyd, G D

    1987-07-15

    What can optics with its promise of parallelism do for neural networks which require matrix multipliers? An all optical approach requires optical logic devices which are still in their infancy. An alternative is to retain electronic logic while optically addressing the synapse matrix. This paper considers several versions of an optically addressed neural network compatible with VLSI that could be fabricated with the synapse connection unspecified. This optical matrix multiplier circuit is compared to an all electronic matrix multiplier. For the optical version a synapse consisting of back-to-back photodiodes is found to have a suitable i-v characteristic for optical matrix multiplication (a linear region) plus a clipping or nonlinear region as required for neural networks. Four photodiodes per synapse are required. The strength of the synapse connection is controlled by the optical power and is thus an adjustable parameter. The synapse network can be programmed in various ways such as a shadow mask of metal, imaged mask (static), or light valve or an acoustooptic scanned laser beam or array of beams (dynamic). A milliwatt from LEDs or lasers is adequate power. The neuron has a linear transfer function and is either a summing amplifier, in which case the synapse signal is current, or an integrator, in which case the synapse signal is charge, the choice of which depends on the programming mode. Optical addressing and settling times of microseconds are anticipated. Electronic neural networks using single-value resistor synapses or single-bit programmable synapses have been demonstrated in the high-gain region of discrete single-value feedback. As an alternative to these networks and the above proposed optical synapses, an electronic analog-voltage vector matrix multiplier is considered using MOSFETS as the variable conductance in CMOS VLSI. It is concluded that a shadow mask addressed (static) optical neural network is promising.

  4. Porosity Log Prediction Using Artificial Neural Network

    Science.gov (United States)

    Dwi Saputro, Oki; Lazuardi Maulana, Zulfikar; Dzar Eljabbar Latief, Fourier

    2016-08-01

    Well logging is important in oil and gas exploration. Many physical parameters of the reservoir are derived from well logging measurements. Geophysicists often use well logging to obtain reservoir properties such as porosity, water saturation and permeability. Most of the time, the measurement of these reservoir properties is considered expensive. One method of substituting for the measurement is prediction using an artificial neural network. In this paper, an artificial neural network is used to predict porosity log data from other log data. Three wells from the ‘yy’ field are used to conduct the prediction experiment. The log data are sonic, gamma-ray, and porosity logs. One of the three wells is used as training data for the artificial neural network, which employs the Levenberg-Marquardt backpropagation algorithm. Through several trials, we find that the optimal training input is sonic log data and gamma-ray log data with a hidden layer of 10 neurons. The prediction result in well 1 has a correlation of 0.92 and a mean squared error of 5.67 x 10-4. The trained network is then applied to the other wells' data. The results show that the correlations in well 2 and well 3 are 0.872 and 0.9077, respectively. The mean squared errors in well 2 and well 3 are 11 x 10-4 and 9.539 x 10-4, respectively. From these results we conclude that sonic and gamma-ray logs can be a good combination for predicting porosity with a neural network.
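
    A rough sketch of the workflow with synthetic logs is given below; it uses scikit-learn's LBFGS-trained MLP rather than the Levenberg-Marquardt training reported above, and the log ranges and the porosity relation are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal sketch (synthetic logs; LBFGS instead of Levenberg-Marquardt training):
# predict a porosity log from sonic and gamma-ray logs of a training well, then
# apply the trained network to another well.

rng = np.random.default_rng(9)

def synthetic_well(n=500):
    sonic = rng.uniform(60, 120, n)                     # us/ft, made-up range
    gamma = rng.uniform(20, 150, n)                     # API units, made-up range
    porosity = 0.003 * sonic - 0.0005 * gamma + rng.normal(0, 0.01, n)
    return np.column_stack([sonic, gamma]), porosity

X_train, y_train = synthetic_well()                     # "well 1"
X_other, y_other = synthetic_well()                     # "well 2"

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                                   max_iter=5000, random_state=9))
model.fit(X_train, y_train)

pred = model.predict(X_other)
corr = np.corrcoef(pred, y_other)[0, 1]
mse = np.mean((pred - y_other) ** 2)
print(f"correlation on the held-out well: {corr:.3f}, MSE: {mse:.2e}")
```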

  5. Autonomous robot behavior based on neural networks

    Science.gov (United States)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn experienced knowledge (that is, action procedures together with the corresponding knowledge of the workspace structure), and to recognize its working environment. The planning of the intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. The adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule and an initialization phase. The developed neural network combines advantages of networks based on the Adaptive Resonance Theory and, by using the shadowed hidden layer, provides the ability to recognize slightly translated or rotated obstacles in any direction.

  6. Exploiting network redundancy for low-cost neural network realizations.

    NARCIS (Netherlands)

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT

    1996-01-01

    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  7. Neutron spectrum unfolding using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)]. E-mail: rvega@cantera.reduaz.mx

    2004-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using a large set of neutron spectra compiled by the International Atomic Energy Agency. These include spectra from isotopic neutron sources, and reference and operational neutron spectra obtained from accelerators and nuclear reactors. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the corresponding spectrum was used as output during neural network training. The network has 7 input nodes, 56 neurons in the hidden layer, and 31 neurons in the output layer. After training, the network was tested with the Bonner spheres count rates produced by twelve neutron spectra. The network allows unfolding the neutron spectrum from count rates measured with Bonner spheres. Good results are obtained when the testing count rates belong to neutron spectra used during training; acceptable results are obtained for count rates from actual neutron fields; however, the network fails when the count rates belong to monoenergetic neutron sources. (Author)

  8. Analysis of Recurrent Analog Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-06-01

    Full Text Available In this paper, an original rigorous analysis of recurrent analog neural networks, which are built from opamp neurons, is presented. The analysis, which is based on the approximate model of the operational amplifier, reveals causes of possible non-stable states and enables the convergence properties of the network to be determined. Results of the analysis are discussed in order to enable the development of original robust and fast analog networks. In the analysis, special attention is paid to examining the influence of real circuit elements and of the statistical parameters of the processed signals on the parameters of the network.

  9. Predicting Water Levels at Kainji Dam Using Artificial Neural Networks

    African Journals Online (AJOL)

    Predicting Water Levels at Kainji Dam Using Artificial Neural Networks. ... The aim of this study is to develop artificial neural network models for predicting water levels at Kainji Dam, which supplies water to Nigeria's largest ...

  10. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  11. Development of programmable artificial neural networks

    Science.gov (United States)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  12. Sparse neural networks with large learning diversity

    CERN Document Server

    Gripon, Vincent

    2011-01-01

    Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, which is much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint on the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.

  13. The labeled systems of multiple neural networks.

    Science.gov (United States)

    Nemissi, M; Seridi, H; Akdag, H

    2008-08-01

    This paper proposes an implementation scheme for the K-class classification problem using systems of multiple neural networks. Usually, a multi-class problem is decomposed into simple sub-problems that are solved independently using similar single neural networks. Because these sub-problems are not equivalent in their complexity, we propose a system that includes reinforced networks dedicated to solving the more complicated parts of the entire problem. Our approach is inspired by the principles of multi-classifier systems and labeled classification, which aims to improve the performance of networks trained by the back-propagation algorithm. We propose two implementation schemes, based on OAO (one-against-one) and OAA (one-against-all) decompositions respectively. The proposed models are evaluated using the iris and human thigh databases.
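    As a rough illustration of the two decomposition schemes, the sketch below wraps a small multilayer perceptron in scikit-learn's one-vs-rest and one-vs-one meta-classifiers; the iris data merely stands in for the databases used in the paper, and the reinforced-network refinement is not reproduced.

```python
# Hedged sketch: decomposing a K-class problem into binary sub-problems,
# each solved by a similar single neural network.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
base = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)

oaa = OneVsRestClassifier(base).fit(X, y)   # one-against-all: K binary networks
oao = OneVsOneClassifier(base).fit(X, y)    # one-against-one: K(K-1)/2 networks

print(oaa.score(X, y), oao.score(X, y))
```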

  14. Neural network adaptive control and vibration hierarchical fuzzy control of flexible arm space robot%柔性臂空间机器人的神经网络自适应控制及振动模态分级模糊控制

    Institute of Scientific and Technical Information of China (English)

    梁捷; 陈力; 梁频

    2012-01-01

    For the slow subsystem, a Radial Basis Function (RBF) neural network control algorithm for uncertain parameters was designed to govern trajectory tracking of the coordinated motion; the purpose of the neural network control algorithm is to improve the control accuracy of the whole system by exploiting the good on-line self-learning ability of the neural network. For the fast subsystem, a hierarchical fuzzy control algorithm was used to suppress the vibration of the flexible link; the purpose of the hierarchical fuzzy control algorithm is to reduce the size of the fuzzy rule base and thereby effectively raise the computational efficiency of the fuzzy controller. Computer simulation results illustrate the effectiveness and feasibility of the proposed algorithms.

  15. Implementing Signature Neural Networks with Spiking Neurons

    Directory of Open Access Journals (Sweden)

    José Luis Carrillo-Medina

    2016-12-01

    Full Text Available Spiking Neural Networks constitute the most promising approach to developing realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. Spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm – i.e., neural signatures to identify each unit in the network, local information contextualization during the processing and multicoding strategies for information propagation regarding the origin and the content of the data – to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also ...

  16. Performance Comparison of Neural Networks for HRTFs Approximation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In order to approximate head-related transfer functions (HRTFs), this paper employs and compares three kinds of single-input neural network models, namely multi-layer perceptron (MLP) networks, radial basis function (RBF) networks and wavelet neural networks (WNN), so as to select the best network model for further HRTF approximation. Experimental results demonstrate that wavelet neural networks are the most efficient and useful.

  17. Applications of Neural Networks in Spinning Prediction

    Institute of Scientific and Technical Information of China (English)

    程文红; 陆凯

    2003-01-01

    The neural network spinning prediction models (BP and RBF networks), trained on data from the mill, can predict yarn qualities and spinning performance. The input parameters of the models are yarn count, diameter, hauteur, bundle strength, spinning draft, spinning speed, traveler number and twist. The output parameters are yarn evenness, thin places, tenacity and elongation, and ends-down. The predicted results match the test data well.

  18. Temporal association in asymmetric neural networks

    Science.gov (United States)

    Sompolinsky, H.; Kanter, I.

    1986-12-01

    A neural network model which is capable of recalling time sequences and cycles of patterns is introduced. In this model, some of the synaptic connections, Jij, between pairs of neurons are asymmetric (Jij≠Jji) and have a slow dynamic response. The effects of thermal noise on the generated sequences are discussed. Simulation results demonstrating the performance of the network are presented. The model may also be useful in understanding the generation of rhythmic patterns in biological motor systems.
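    The sketch below is a toy numpy rendering of this kind of dynamics: a symmetric Hebbian term stabilizes each stored pattern while an asymmetric term, acting through delayed (slowly responding) synapses, pushes the state toward the next pattern in the cycle. The network size, sequence length, delay window and coupling strength are arbitrary illustrative choices, not values from the paper.

```python
# Toy sequence-recall sketch with asymmetric, slowly responding synapses.
import numpy as np

rng = np.random.default_rng(1)
N, P, tau, lam = 200, 4, 8, 2.0            # neurons, patterns, delay, asymmetry strength
xi = rng.choice([-1, 1], size=(P, N))      # +/-1 patterns forming a cycle

J_sym = xi.T @ xi / N                              # symmetric couplings (J_ij = J_ji)
J_asym = np.roll(xi, -1, axis=0).T @ xi / N        # asymmetric couplings (J_ij != J_ji)

S = xi[0].copy()                           # start at the first pattern
history = [S.copy()]
for t in range(60):
    S_delayed = history[max(0, len(history) - tau)]
    h = J_sym @ S + lam * (J_asym @ S_delayed)     # local field with delayed asymmetric part
    S = np.where(h >= 0, 1, -1)
    history.append(S.copy())
    if t % 15 == 0:
        print(t, np.round(xi @ S / N, 2))  # overlap with each stored pattern
```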

  19. Incremental construction of LSTM recurrent neural network

    OpenAIRE

    Ribeiro, Evandsa Sabrine Lopes-Lima; Alquézar Mancho, René

    2002-01-01

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the net to remember significant events far back in the input sequence, in order to solve long time lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTM networks have been compared. These methods include cascade and ...

  20. Stability and Adaptation of Neural Networks

    Science.gov (United States)

    1990-11-02

  1. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2008-06-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  2. Compressing Neural Networks with the Hashing Trick

    OpenAIRE

    Chen, Wenlin; Wilson, James T.; Tyree, Stephen; Weinberger, Kilian Q.; Chen, Yixin

    2015-01-01

    As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to ...
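    A hedged sketch of the core weight-sharing idea (not the authors' implementation): every virtual connection of a layer is mapped by a cheap hash to one of K shared parameters, so a large weight matrix is backed by a small parameter vector. The layer sizes, K and the hash functions below are illustrative stand-ins.

```python
# Sketch of hashed weight sharing: a 256x128 virtual weight matrix backed by
# only 1024 real parameters.
import numpy as np

n_in, n_out, K = 256, 128, 1024
rng = np.random.default_rng(0)
shared = rng.standard_normal(K) * 0.01     # the only trainable weights

def bucket(i, j):
    # Cheap deterministic hash standing in for a proper hash function.
    return (i * 2654435761 + j * 97) % K

def sign(i, j):
    return 1.0 if (i * 40503 + j * 65537) % 2 == 0 else -1.0

# Expand the shared vector into the virtual matrix for a forward pass.
W = np.array([[sign(i, j) * shared[bucket(i, j)] for j in range(n_out)]
              for i in range(n_in)])
x = rng.standard_normal(n_in)
h = np.tanh(x @ W)
print(W.shape, h.shape, shared.size)       # (256, 128) (128,) 1024
```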

  3. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2009-11-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  4. Auto-associative nanoelectronic neural network

    Energy Technology Data Exchange (ETDEWEB)

    Nogueira, C. P. S. M.; Guimarães, J. G. [Departamento de Engenharia Elétrica - Laboratório de Dispositivos e Circuito Integrado, Universidade de Brasília, CP 4386, CEP 70904-970 Brasília DF (Brazil)

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.

  5. Estimation of concrete compressive strength using artificial neural network

    OpenAIRE

    Kostić, Srđan; Vasović, Dejan

    2015-01-01

    In the present paper, concrete compressive strength is evaluated using a back-propagation feed-forward artificial neural network. Training of the neural network is performed using the Levenberg-Marquardt learning algorithm for four artificial neural network architectures, with one, three, eight and twelve nodes in the hidden layer, in order to avoid the occurrence of overfitting. Training, validation and testing of the neural network are conducted for 75 concrete samples with distinct w/c ratio and amount of superp...

  6. Analysis of Wideband Beamformers Designed with Artificial Neural Networks

    Science.gov (United States)

    1990-12-01

    Technical Report 0-90-1: "Analysis of Wideband Beamformers Designed with Artificial Neural Networks", by Cary Cox, Instrumentation Services Division. A brief tutorial on beamformers and neural networks is also provided. The study was conducted under the general supervision of Messrs. George P. Bonner, Chief ...

  7. Neural network method for solving elastoplastic finite element problems

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A basic artificial neural network optimization principle, the Lagrange Programming Neural Network (LPNN) model, is presented for solving elastoplastic finite element problems. The nonlinear problems of mechanics are represented as a neural-network-based optimization problem by adopting the nonlinear function as the neuron transfer function. Finally, two simple elastoplastic problems are numerically simulated. The LPNN optimization results for the elastoplastic problems are found to be comparable to those of the traditional Hopfield neural network optimization model.

  8. Combining logistic regression and neural networks to create predictive models.

    OpenAIRE

    Spackman, K. A.

    1992-01-01

    Neural networks are being used widely in medicine and other areas to create predictive models from data. The statistical method that most closely parallels neural networks is logistic regression. This paper outlines some ways in which neural networks and logistic regression are similar, shows how a small modification of logistic regression can be used in the training of neural network models, and illustrates the use of this modification for variable selection and predictive model building wit...
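    The parallel the abstract draws can be made concrete with a short sketch: logistic regression is exactly a single sigmoid neuron trained by gradient descent on the negative log-likelihood, and near-zero fitted weights hint at variables that could be dropped. The data below are synthetic placeholders, not the paper's examples.

```python
# Logistic regression viewed as a one-neuron network, trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
true_w = np.array([1.5, -2.0, 0.0, 0.7, 0.0])          # two irrelevant inputs
y = (rng.random(300) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

w, b, lr = np.zeros(5), 0.0, 0.5
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))                 # sigmoid "network" output
    w -= lr * X.T @ (p - y) / len(y)                   # gradient of the negative log-likelihood
    b -= lr * np.mean(p - y)

print(np.round(w, 2))   # near-zero weights suggest candidates for variable removal
```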

  9. Dynamic Object Identification with SOM-based neural networks

    Directory of Open Access Journals (Sweden)

    Aleksey Averkin

    2014-03-01

    Full Text Available In this article a number of neural networks based on self-organizing maps that can be successfully used for dynamic object identification are described. Unique SOM-based modular neural networks with vector-quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms for learning and operation of such SOM-based neural networks are described in detail, and some experimental results and a comparison with other neural networks are given.

  10. Remote Sensing Image Segmentation with Probabilistic Neural Networks

    Institute of Scientific and Technical Information of China (English)

    LIU Gang

    2005-01-01

    This paper focuses on image segmentation with probabilistic neural networks (PNNs). Back-propagation neural networks (BPNNs) and multilayer perceptron networks (MLPs) are also considered in this study. In particular, this paper investigates the implementation of PNNs in image segmentation and the optimal processing of image segmentation with a PNN. A comparison between image segmentation with PNNs and with other neural networks is given. The experimental results show that PNNs can be successfully applied to image segmentation with good results.
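    A probabilistic neural network is essentially a Parzen-window classifier: one Gaussian pattern unit per training sample, one summation unit per class, and a winner-take-all output. The sketch below applies this idea to synthetic per-pixel feature vectors; the data, class split and smoothing parameter are placeholders, not the study's setup.

```python
# Minimal PNN (Parzen-window) classifier applied to per-pixel feature vectors.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for k, c in enumerate(classes):
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=-1)
        scores[:, k] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)   # class-wise kernel density
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y_train = np.array([0] * 50 + [1] * 50)    # e.g. two land-cover classes
X_test = rng.normal(1.5, 1, (10, 3))
print(pnn_predict(X_train, y_train, X_test))
```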

  11. Optimizing neural network models: motivation and case studies

    OpenAIRE

    Harp, S A; T. Samad

    2012-01-01

    Practical successes have been achieved  with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally  rem...

  12. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge about accidents is regarded as potentially very helpful in risk management. A large amount of defect-detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theory and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible defect risks obtained from the RRT. This method not only generates quantitative risk results but also reveals the key defects and the critical defect risks. The paper is a further development of accident-causation network modeling methods and can provide guidance for specific maintenance measures.
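    As a hedged illustration of the frequent-pattern step, the sketch below counts itemsets that co-occur across inspection records (structure type together with observed defects). The records, item names and support threshold are invented, and the paper's Apriori improvements are not reproduced.

```python
# Tiny frequent-itemset miner in the spirit of the Apriori step described above.
from collections import Counter
from itertools import combinations

records = [                                 # hypothetical inspection records
    {"lining:plain", "defect:crack", "defect:leakage"},
    {"lining:plain", "defect:crack"},
    {"lining:reinforced", "defect:leakage"},
    {"lining:plain", "defect:crack", "defect:spalling"},
]
min_support = 0.5

def frequent_itemsets(transactions, min_support, max_size=2):
    n = len(transactions)
    found = {}
    candidates = list({frozenset([item]) for t in transactions for item in t})
    for size in range(1, max_size + 1):
        counts = Counter(c for t in transactions for c in candidates if c <= t)
        frequent = {c: v / n for c, v in counts.items() if v / n >= min_support}
        found.update(frequent)
        candidates = list({a | b for a, b in combinations(frequent, 2)
                           if len(a | b) == size + 1})
    return found

for itemset, support in sorted(frequent_itemsets(records, min_support).items(),
                               key=lambda kv: -kv[1]):
    print(sorted(itemset), round(support, 2))
```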

  13. Hopfield Neural Network Approach to Clustering in Mobile Radio Networks

    Institute of Scientific and Technical Information of China (English)

    JiangYan; LiChengshu

    1995-01-01

    In this paper, the Hopfield neural network (NN) algorithm is developed for selecting gateways in cluster linkage. The linked cluster (LC) architecture is assumed, to achieve distributed network control in multihop radio networks through local controllers called clusterheads; the nodes connecting these clusterheads are defined to be gateways. Since in Hopfield NN models the most critical issue is the determination of the connection weights, we use the approach of Lagrange multipliers (LM) for its dynamic nature.

  14. Teaching a machine to see: unsupervised image segmentation and categorisation using growing neural gas and hierarchical clustering

    CERN Document Server

    Hocking, Alex; Davey, Neil; Sun, Yi

    2015-01-01

    We present a novel unsupervised learning approach to automatically segment and label images in astronomical surveys. Automation of this procedure will be essential as next-generation surveys enter the petabyte scale: data volumes will exceed the capability of even large crowd-sourced analyses. We demonstrate how a growing neural gas (GNG) can be used to encode the feature space of imaging data. When coupled with a technique called hierarchical clustering, imaging data can be automatically segmented and labelled by organising nodes in the GNG. The key distinction of unsupervised learning is that these labels need not be known prior to training; rather, they are determined by the algorithm itself. Importantly, after training a network can be presented with images it has never 'seen' before and provide consistent categorisation of features. As a proof-of-concept we demonstrate application on data from the Hubble Space Telescope Frontier Fields: images of clusters of galaxies containing a mixture of galaxy type...

  15. A Modified Algorithm for Feedforward Neural Networks

    Institute of Scientific and Technical Information of China (English)

    夏战国; 管红杰; 李政伟; 孟斌

    2002-01-01

    As the most popular learning algorithm for feedforward neural networks, the classic BP algorithm has many shortcomings. To overcome some of them, a modified learning algorithm is proposed in this article, and simulation results illustrate that the modified algorithm is more effective and practicable.

  16. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  17. Psychometric Measurement Models and Artificial Neural Networks

    Science.gov (United States)

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  18. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the step of face detection, we propose a hybrid model combining AdaBoost and Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN will be aligned by Active Shape Model and Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: geometric feature based method and Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of the human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of our proposed model.

  19. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain neutron doses using only the count rates of a Bonner spheres spectrometer. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of artificial neural networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
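    The χ² comparison mentioned above amounts to a simple goodness-of-fit check between reference doses and network-estimated doses; the sketch below uses invented numbers purely to show the form of the statistic.

```python
# Chi-square goodness-of-fit between reference doses and network estimates
# (the dose values are invented for illustration).
import numpy as np

reference = np.array([1.20, 0.80, 2.50, 3.10, 0.60])   # e.g. reference dose values
estimated = np.array([1.15, 0.85, 2.60, 3.00, 0.58])   # network outputs

chi2 = np.sum((estimated - reference) ** 2 / reference)
print(f"chi2 = {chi2:.4f} with {len(reference) - 1} degrees of freedom")
```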

  20. Chaotic behavior of a layered neural network

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B.; Meir, R.

    1988-09-15

    We consider the evolution of configurations in a layered feed-forward neural network. Exact expressions for the evolution of the distance between two configurations are obtained in the thermodynamic limit. Our results show that the distance between two arbitrarily close configurations always increases, implying chaotic behavior, even in the phase of good retrieval.

  1. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  2. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage...

  3. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  4. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to perform effective forex market pattern recognition [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history in order to adapt our trading system behaviour based on them.

  5. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in the EEG of epileptic patients in an automated fashion...

  6. Learning chaotic attractors by neural networks

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Giles, CL; Takens, F; van den Bleek, CM

    2000-01-01

    An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to short-term predict the time series. At the same time a criterion, developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored th

  8. Binaural Sound Localization Using Neural Networks

    Science.gov (United States)

    1991-12-12

    ... by Brennan, involved the implementation of a neural network to model the ability of a bat to discriminate between a mealworm and an inedible object ... locate, identify and capture airborne prey (6:2). The sonar returns were collected from the mealworms, spheres and disks at various rotations (90 to

  9. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutar; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods: autoregressive integrated moving average; feedforward neural networks-time delay neural network and online sequential extreme learning machine; and recurrent neural networks-long short-term memory and echo-state networks on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  10. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    Science.gov (United States)

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results for various configurations of a deep learning structure and baseline neural networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in the grading performance of convolutional neural networks, based on sensitivity and specificity, compared to neural networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from the convolutional neural networks.

  11. Neural networks in economic modelling : An empirical study

    NARCIS (Netherlands)

    Verkooijen, W.J.H.

    1996-01-01

    This dissertation addresses the statistical aspects of neural networks and their usability for solving problems in economics and finance. Neural networks are discussed in a framework of modelling which is generally accepted in econometrics. Within this framework a neural network is regarded as a sta

  12. Extracting Knowledge from Supervised Neural Networks in Image Procsssing

    NARCIS (Netherlands)

    Zwaag, van der Berend Jan; Slump, Kees; Spaanenburg, Lambert; Jain, R.; Abraham, A.; Faucher, C.; Zwaag, van der B.J.

    2003-01-01

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a my

  13. Analysis of Neural Networks in Terms of Domain Functions

    NARCIS (Netherlands)

    Zwaag, van der Berend Jan; Slump, Cees; Spaanenburg, Lambert

    2002-01-01

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a my

  14. Recognition of Continuous Digits by Quantum Neural Networks

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper describes a new kind of neural network, the Quantum Neural Network (QNN), and its application to the recognition of continuous digits. QNN combines the advantages of neural modeling and fuzzy theoretic principles. Experimental results show that an error reduction of more than 15 percent is achieved on a speaker-independent continuous digit recognition task compared with BP networks.

  15. SOLVING INVERSE KINEMATICS OF REDUNDANT MANIPULATOR BASED ON NEURAL NETWORK

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    For redundant manipulators, a neural network is used to tackle the velocity-level inverse kinematics of robot manipulators. The neural networks utilized are multilayer perceptrons with a back-propagation training algorithm. A weight table is used to store the weights that solve the inverse kinematics under the different optimization performance criteria. Simulations verify the effectiveness of using the neural network.
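    A hedged sketch of the velocity-level idea: generate (configuration, end-effector velocity) to joint-velocity pairs for a planar three-link arm using the minimum-norm pseudo-inverse as one possible optimization criterion, then train a back-propagation perceptron on them. The arm geometry, sample counts and layer sizes are illustrative assumptions, not the paper's setup.

```python
# Learn velocity inverse kinematics of a redundant planar 3-link arm with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

L = np.array([1.0, 0.8, 0.6])                          # link lengths (arbitrary)

def jacobian(q):
    s = np.cumsum(q)                                   # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))
    return J

rng = np.random.default_rng(0)
Q = rng.uniform(-np.pi, np.pi, (5000, 3))              # random joint configurations
V = rng.uniform(-1.0, 1.0, (5000, 2))                  # desired end-effector velocities
# Minimum-norm solutions serve as training targets (one optimization criterion).
Qdot = np.array([np.linalg.pinv(jacobian(q)) @ v for q, v in zip(Q, V)])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(np.hstack([Q, V]), Qdot)                       # input (q, xdot) -> output qdot
print(net.predict(np.hstack([Q[:1], V[:1]])), Qdot[0])
```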

  16. A Fuzzy Neural Network for Fault Pattern Recognition

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper combines fuzzy set theory with the ART neural network, and demonstrates some important properties of the fuzzy ART neural network algorithm. The results from an application to ball-bearing diagnosis indicate that a fuzzy ART neural network provides fast and stable recognition of fuzzy patterns.

  17. A Direct Feedback Control Based on Fuzzy Recurrent Neural Network

    Institute of Scientific and Technical Information of China (English)

    李明; 马小平

    2002-01-01

    A direct feedback control system based on a fuzzy recurrent neural network is proposed, and a method for training the weights of the fuzzy recurrent neural network was designed by applying a modified contraction-mapping genetic algorithm. Computer simulation results indicate that the fuzzy recurrent neural network controller has excellent dynamic and static performance.

  18. [Application of artificial neural networks in infectious diseases].

    Science.gov (United States)

    Xu, Jun-fang; Zhou, Xiao-nong

    2011-02-28

    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks are applied in medicine and biology. This review summarizes the application of artificial neural networks to the relevant factors, prediction and diagnosis of infectious diseases in recent years.

  19. Prediction based chaos control via a new neural network

    Energy Technology Data Exchange (ETDEWEB)

    Shen Liqun [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China)], E-mail: liqunshen@gmail.com; Wang Mao [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China); Liu Wanyu [School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001 (China); Sun Guanghui [Space Control and Inertia Technology Research Center, Harbin Institute of Technology, Harbin 150001 (China)

    2008-11-17

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty of building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.

  20. Hierarchical Interference Mitigation for Massive MIMO Cellular Networks

    Science.gov (United States)

    Liu, An; Lau, Vincent

    2014-09-01

    We propose a hierarchical interference mitigation scheme for massive MIMO cellular networks. The MIMO precoder at each base station (BS) is partitioned into an inner precoder and an outer precoder. The inner precoder controls the intra-cell interference and is adaptive to local channel state information (CSI) at each BS (CSIT). The outer precoder controls the inter-cell interference and is adaptive to channel statistics. Such hierarchical precoding structure reduces the number of pilot symbols required for CSI estimation in massive MIMO downlink and is robust to the backhaul latency. We study joint optimization of the outer precoders, the user selection, and the power allocation to maximize a general concave utility which has no closed-form expression. We first apply random matrix theory to obtain an approximated problem with closed-form objective. We show that the solution of the approximated problem is asymptotically optimal with respect to the original problem as the number of antennas per BS grows large. Then using the hidden convexity of the problem, we propose an iterative algorithm to find the optimal solution for the approximated problem. We also obtain a low complexity algorithm with provable convergence. Simulations show that the proposed design has significant gain over various state-of-the-art baselines.

  1. From Designing A Single Neural Network to Designing Neural Network Ensembles

    Institute of Scientific and Technical Information of China (English)

    Liu Yong; Zou Xiu-fer

    2003-01-01

    This paper introduces the supervised learning model and surveys related research work. The paper is organised as follows. A supervised learning model is first described. The bias-variance trade-off is then discussed for the supervised learning model. Based on the bias-variance trade-off, both single neural network approaches and neural network ensemble approaches are reviewed, and problems with the existing approaches are indicated. Finally, the paper concludes by specifying potential future research directions.

  2. Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.

    Science.gov (United States)

    Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng

    2017-03-01

    Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.

  3. A Fuzzy Quantum Neural Network and Its Application in Pattern Recognition

    Institute of Scientific and Technical Information of China (English)

    MIAOFuyou; XIONGYan; CHENHuanhuan; WANGXingfu

    2005-01-01

    This paper proposes a fuzzy quantum neural network model combining a quantum neural network and fuzzy logic, which applies the fuzzy logic to design the collapse rules of the quantum neural network, and solves the character recognition problem. Theoretical analysis and experimental results show that the fuzzy quantum neural network achieves better recognition accuracy than the traditional neural network and the quantum neural network.

  4. Optical implementation of neural networks

    Science.gov (United States)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computer (ONC) using inexpensive pocket-size liquid crystal televisions (LCTVs) has been developed by the graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computer has only 8×8=64 neurons, it can be easily extended to 16×20=320 neurons. The major advantages of this LCTV architecture, as compared with other reported ONCs, are its low cost and operational flexibility. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  5. Distributed Plume Source Localization Using Hierarchical Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    KUANG Xing-hong; LIU Yu-qing; WU Yan-xiang; SHAO Hui-he

    2009-01-01

    A hierarchical wireless sensor networks (WSN) was proposed to estimate the plume source location. Such WSN can be of tremendous help to emergency personnel trying to protect people from terrorist attacks or responding to an accident. The entire surveillant field is divided into several small sub-regions. In each sub-region, the localization algorithm based on the improved particle filter (IPF) was performed to estimate the location. Some improved methods such as weighted centroid, residual resampling were introduced to the IPF algorithm to increase the localization performance. This distributed estimation method elirninates many drawbacks inherent with the traditional centralized optimization method. Simulation results show that localization algorithm is efficient far estimating the plume source location.

  6. Hierarchical self-organization of cytoskeletal active networks

    CERN Document Server

    Gordon, Daniel; Keasar, Chen; Farago, Oded

    2012-01-01

    The structural reorganization of the actin cytoskeleton is facilitated through the action of motor proteins that crosslink the actin filaments and transport them relative to each other. Here, we present a combined experimental-computational study that probes the dynamic evolution of mixtures of actin filaments and clusters of myosin motors. While on small spatial and temporal scales the system behaves in a very noisy manner, on larger scales it evolves into several well distinct patterns such as bundles, asters, and networks. These patterns are characterized by junctions with high connectivity, whose formation is possible due to the organization of the motors in "oligoclusters" (intermediate-size aggregates). The simulations reveal that the self-organization process proceeds through a series of hierarchical steps, starting from local microscopic moves and ranging up to the macroscopic large scales where the steady-state structures are formed. Our results shed light into the mechanisms involved in processes li...

  7. Learning Probabilistic Hierarchical Task Networks to Capture User Preferences

    CERN Document Server

    Li, Nan; Kambhampati, Subbarao; Yoon, Sungwook

    2010-01-01

    We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) in order to capture a user's preferences on plans, by observing only the user's behavior. HTNs are a common choice of representation for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (instead of structure) and representing domain physics or search control knowledge (rather than preferences). Initially we will assume that the observed distribution of plans is an accurate representation of user preference, and then generalize to the situation where feasibility constraints frequently prevent the execution of preferred plans. In order to learn a distribution on plans we adapt an Expectation-Maximization (EM) technique from the discipline of (probabilistic) grammar induction, taking the perspective of task reductions as productions in a context-free...

  8. Distribution network planning algorithm based on Hopfield neural network

    Institute of Scientific and Technical Information of China (English)

    GAO Wei-xin; LUO Xian-jue

    2005-01-01

    This paper presents a new algorithm based on the Hopfield neural network to find the optimal solution for an electric distribution network. The algorithm transforms the distribution power network planning problem into a directed graph planning problem. The Hopfield neural network is designed to decide the in-degree of each node and is applied in combination with an energy function. The new algorithm does not need to encode city streets or normalize data, so the program is easier to implement. A case study applying the method to a district of 29 streets showed that an optimal solution for the planning of such a power system could be obtained in only 26 iterations. The energy function and algorithm developed in this work have the following advantages over many existing algorithms for electric distribution network planning: fast convergence, and no need to encode all possible lines.
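    A generic discrete Hopfield energy descent of the kind such planning algorithms build on is sketched below: asynchronous unit updates that only ever lower the energy E = -1/2 sᵀWs - bᵀs. The symmetric weight matrix here is random, not the paper's encoding of a 29-street district.

```python
# Generic discrete Hopfield energy descent (random symmetric weights for illustration).
import numpy as np

rng = np.random.default_rng(0)
n = 29                                     # e.g. one unit per candidate connection
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                          # symmetric couplings
np.fill_diagonal(W, 0.0)                   # no self-coupling
b = rng.standard_normal(n)
s = rng.integers(0, 2, size=n).astype(float)

def energy(state):
    return -0.5 * state @ W @ state - b @ state

for sweep in range(10):
    for i in rng.permutation(n):           # asynchronous unit updates
        s[i] = 1.0 if W[i] @ s + b[i] > 0 else 0.0
    print(sweep, round(energy(s), 3))      # energy never increases
```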

  9. Neural networks in windprofiler data processing

    Science.gov (United States)

    Weber, H.; Richner, H.; Kretzschmar, R.; Ruffieux, D.

    2003-04-01

    Wind profilers are basically Doppler radars yielding 3-dimensional wind profiles that are deduced from the Doppler shift caused by turbulent elements in the atmosphere. These signals can be contaminated by other airborne elements such as birds or hydrometeors. Using a feed-forward neural network with one hidden layer and one output unit, birds and hydrometeors can be successfully identified in non-averaged single spectra; these are subsequently removed in the wind computation. An infrared camera was used to identify birds in one of the beams of the wind profiler. After training the network with about 6000 contaminated data sets, it was able to identify contaminated data in a test data set with a reliability of 96 percent. The assumption was made that the neural network parameters obtained in the beam for which bird data were collected can be transferred to the other beams (at least three beams are needed for computing wind vectors). Comparing the evolution of a wind field with and without the neural network shows a significant improvement in wind data quality. Current work concentrates on training the network for hydrometeors as well. It is hoped that the instrument's capability can thus be expanded to measure not only correct winds, but also to observe bird migration, estimate precipitation and -- by combining precipitation information with vertical velocity measurements -- monitor the height of the melting layer.
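    A hypothetical sketch of such a contamination classifier (one hidden layer, one output unit): the spectra and labels below are random placeholders for the roughly 6000 labelled Doppler spectra, so the numbers say nothing about the real 96 percent figure.

```python
# One-hidden-layer classifier flagging contaminated wind-profiler spectra.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
spectra = rng.random((6000, 64))           # placeholder single Doppler spectra (64 bins)
labels = (spectra[:, 30:34].mean(axis=1) > 0.55).astype(int)   # toy "contaminated" labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(spectra, labels)

clean = spectra[clf.predict(spectra) == 0] # contaminated spectra are excluded
print(clf.score(spectra, labels), clean.shape)
```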

  10. Color control of printers by neural networks

    Science.gov (United States)

    Tominaga, Shoji

    1998-07-01

    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for UCR and GCR. The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.

  11. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  12. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of the signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists of minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents a mathematical proof of the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
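    The core of the second stage can be sketched as a single neuron with a sine transfer function whose amplitude, frequency and phase are adjusted by gradient descent on the mean-square error; here the period estimate from the first network is replaced by a rough initial guess, and the signal is synthetic.

```python
# Fit one sine-transfer neuron to a noisy periodic signal by gradient descent on the MSE.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 400)
signal = 1.3 * np.sin(2 * np.pi * 1.5 * t + 0.4) + 0.05 * rng.standard_normal(t.size)

A, w, phi, lr = 1.0, 2 * np.pi * 1.45, 0.0, 0.01   # w starts near the (estimated) period
for _ in range(5000):
    err = A * np.sin(w * t + phi) - signal
    A   -= lr * np.mean(err * np.sin(w * t + phi))
    w   -= lr * np.mean(err * A * t * np.cos(w * t + phi))
    phi -= lr * np.mean(err * A * np.cos(w * t + phi))

print(round(A, 2), round(w / (2 * np.pi), 2), round(phi, 2))   # should approach ~1.3, ~1.5, ~0.4
```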

  14. The Stellar parametrization using Artificial Neural Network

    CERN Document Server

    Giridhar, Sunetra; Kunder, Andrea; Muneer, S; Kumar, G Selva

    2012-01-01

    An update on recent methods for automated stellar parametrization is given. We present preliminary results of the ongoing program for rapid parametrization of field stars using medium resolution spectra obtained using Vainu Bappu Telescope at VBO, Kavalur, India. We have used Artificial Neural Network for estimating temperature, gravity, metallicity and absolute magnitude of the field stars. The network for each parameter is trained independently using a large number of calibrating stars. The trained network is used for estimating atmospheric parameters of unexplored field stars.

  15. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  16. a Heterosynaptic Learning Rule for Neural Networks

    Science.gov (United States)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but effects also further remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigations in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions including the exclusive-or (XOR) mapping in a multilayer feed-forward network. We find, that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases with the number of patterns to be learned polynomially, indicating efficient learning.

  17. Neural network for sonogram gap filling

    DEFF Research Database (Denmark)

    Klebæk, Henrik; Jensen, Jørgen Arendt; Hansen, Lars Kai

    1995-01-01

    In duplex imaging both an anatomical B-mode image and a sonogram are acquired, and the time for data acquisition is divided between the two images. This gives problems when rapid B-mode image display is needed, since there is not time for measuring the velocity data. Gaps then appear in the sonogram and in the audio signal, rendering the audio signal useless, thus making diagnosis difficult. The current goal for ultrasound scanners is to maintain a high refresh rate for the B-mode image and at the same time attain a high maximum velocity in the sonogram display. This precludes the intermixing ... The neural network is trained on part of the data and the network is pruned by the optimal brain damage procedure in order to reduce the number of parameters in the network, and thereby reduce the risk of overfitting. The neural predictor is compared to using a linear filter for the mean and variance time ...

  18. Fuzzy logic and neural network technologies

    Science.gov (United States)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  19. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We suggest to adapt the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential ...

  20. High-Performance Neural Networks for Visual Object Classification

    CERN Document Server

    Cireşan, Dan C; Masci, Jonathan; Gambardella, Luca M; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.

  1. The loading problem for recursive neural networks.

    Science.gov (United States)

    Gori, Marco; Sperduti, Alessandro

    2005-10-01

    The present work deals with one of the major and not yet completely understood topics of supervised connectionist models. Namely, it investigates the relationships between the difficulty of a given learning task and the chosen neural network architecture. These relationships have been investigated and nicely established for some interesting problems in the case of neural networks used for processing vectors and sequences, but only a few studies have dealt with loading problems involving graphical inputs. In this paper, we present sufficient conditions which guarantee the absence of local minima of the error function in the case of learning directed acyclic graphs with recursive neural networks. We introduce topological indices which can be directly calculated from the given training set and which allow us to design neural architectures with a local-minima-free error function. In particular, we conceive a reduction algorithm that involves both the information attached to the nodes and the topology, which significantly enlarges the class of problems with unimodal error function previously proposed in the literature.

  2. Inference and contradictory analysis for binary neural networks

    Institute of Scientific and Technical Information of China (English)

    郭宝龙; 郭雷

    1996-01-01

    A weak-inference theory and a contradictory analysis for binary neural networks (BNNs) are presented. The analysis indicates that the essential reason why a neural network changes its states is the existence of superior contradiction inside the network, and that the process by which a neural network seeks a solution corresponds to eliminating this superior contradiction. Different from general constraint satisfaction networks, the solutions found by BNNs may contain inferior contradiction but not superior contradiction.

  3. Clustering in mobile ad hoc network based on neural network

    Institute of Scientific and Technical Information of China (English)

    CHEN Ai-bin; CAI Zi-xing; HU De-wen

    2006-01-01

    An on-demand distributed clustering algorithm based on a neural network was proposed. The system parameters and the combined weight for each node were computed, and cluster-heads were chosen using the weighted clustering algorithm; then a training set was created and a neural network was trained. In this algorithm, several system parameters were taken into account, such as the ideal node-degree, the transmission power, the mobility and the battery power of the nodes. The algorithm can be used directly to test whether a node is a cluster-head or not. Moreover, cluster re-creation can be sped up.
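    For illustration only, the sketch below shows a WCA-style combined-weight computation and cluster-head election of the kind such a training set can be built on; the attribute names, weighting factors and ideal node-degree are assumptions rather than the paper's values.

```python
# Illustrative sketch (not the paper's exact formulation): a WCA-style combined
# weight per node, where a node whose weight is smallest in its neighbourhood is
# elected cluster-head.  Attribute names and weighting factors are assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    degree: int           # current number of neighbours
    tx_power: float       # transmission power needed to reach neighbours
    mobility: float       # average speed
    battery_drain: float  # cumulative time spent serving as cluster-head

IDEAL_DEGREE = 4
W1, W2, W3, W4 = 0.7, 0.2, 0.05, 0.05   # assumed weighting factors, summing to 1

def combined_weight(n: Node) -> float:
    degree_diff = abs(n.degree - IDEAL_DEGREE)
    return (W1 * degree_diff + W2 * n.tx_power +
            W3 * n.mobility + W4 * n.battery_drain)

def elect_cluster_heads(nodes, neighbours):
    """neighbours: dict mapping node_id -> set of neighbouring node_ids."""
    weights = {n.node_id: combined_weight(n) for n in nodes}
    heads = []
    for n in nodes:
        # A node becomes cluster-head if no neighbour has a smaller weight.
        if all(weights[n.node_id] <= weights[m] for m in neighbours[n.node_id]):
            heads.append(n.node_id)
    return heads
```

    The (node attributes, cluster-head label) pairs produced this way would then serve as the training set for the neural network described in the abstract.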

  4. Pruning Neural Networks with Distribution Estimation Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed-forward neural network trained with standard back-propagation and public-domain and artificial data sets. The pruned networks seemed to have better or equal accuracy compared with the original fully connected networks. Only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but found important differences in the execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.

  5. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks of best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory to inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline for deciding the values of parameters in a simulation of a spiking neural network.

  6. Gait Recognition Based on Convolutional Neural Networks

    Science.gov (United States)

    Sokolova, A.; Konushin, A.

    2017-05-01

    In this work we investigate the problem of people recognition by their gait. For this task, we implement deep learning approach using the optical flow as the main source of motion information and combine neural feature extraction with the additional embedding of descriptors for representation improvement. In order to find the best heuristics, we compare several deep neural network architectures, learning and classification strategies. The experiments were made on two popular datasets for gait recognition, so we investigate their advantages and disadvantages and the transferability of considered methods.

  7. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks have been discussed in detail through illustrative examples, methods and generic applications. Extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank.

  8. Cancer classification based on gene expression using neural networks.

    Science.gov (United States)

    Hu, H P; Niu, Z J; Bai, Y P; Tan, X H

    2015-12-21

    Based on gene expression, we have classified 53 colon cancer patients with UICC II into two groups: relapse and no relapse. Samples were taken from each patient, and gene information was extracted. Of the 53 samples examined, 500 genes were considered proper through analyses by S-Kohonen, BP, and SVM neural networks. Classification accuracy obtained by S-Kohonen neural network reaches 91%, which was more accurate than classification by BP and SVM neural networks. The results show that S-Kohonen neural network is more plausible for classification and has a certain feasibility and validity as compared with BP and SVM neural networks.

  9. Discover & eXplore Neural Network (DXNN) Platform, a Modular TWEANN

    CERN Document Server

    Sher, Gene I

    2010-01-01

    In this paper I present a novel type of Topology and Weight Evolving Artificial Neural Network (TWEANN) system called the Discover & eXplore Neural Network (DXNN) Platform. DXNN utilizes a modular and hierarchical topology which promotes the evolution of highly scalable and dynamically granular systems. Among the novel features discussed in this paper are a simple and database-friendly encoding for hierarchical/modular NNs, a new selection method aimed at producing highly compact and fit individuals within the population, and a new training phase referred to as the "Tuning Phase", which is aimed at removing the need for speciation algorithms. Mutation operators aimed at improving the diversity, expandability, and capabilities of the DXNN through a built-in feature selection method that allows the evolved system to expand, discover and explore new sensors and actuators are also covered. Finally, the DXNN platform is compared to other state-of-the-art TWEANNs on a control task to demonstrate its ability to produce highly co...

  10. Functional expansion representations of artificial neural networks

    Science.gov (United States)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight on architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  11. Convolutional Neural Network Based dem Super Resolution

    Science.gov (United States)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. There, a nonlocal algorithm was introduced to deal with it, and many experiments showed that the strategy is feasible. In that publication, the learning examples are defined as parts of the original DEM and their related high-resolution measurements, since this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain. Yet this may cause problems of incompatibility and a lack of robustness. To overcome them, we intend to investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model will be adopted. The first layer is used to detect features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs will be used to train it. Specifically, the designed network will be optimized by minimizing the error between the output and its expected high resolution DEM. In practical applications, a testing DEM will be input to the convolutional neural network and a super resolution result will be obtained. Many experiments show that the CNN based method can obtain better reconstructions than many classic interpolation methods.
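    A minimal sketch of the three-layer design described above (feature detection, feature compression, reconstruction), written here in PyTorch in the spirit of SRCNN; the kernel sizes, channel counts, optimizer and the assumption of a pre-upsampled input are illustrative choices, not the authors' configuration.

```python
# Minimal sketch of a three-layer CNN for DEM super-resolution, following the
# structure described above: detect features -> compress/map them -> reconstruct.
# Layer sizes and the training loop are assumptions.
import torch
import torch.nn as nn

class DemSRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)    # feature detection
        self.compress = nn.Conv2d(64, 32, kernel_size=1)            # feature compression
        self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)
        self.relu = nn.ReLU()

    def forward(self, low_res_dem):
        # The low-resolution DEM is assumed to have been upsampled (e.g. bicubically)
        # to the target grid before being fed to the network.
        x = self.relu(self.detect(low_res_dem))
        x = self.relu(self.compress(x))
        return self.reconstruct(x)

def train_step(model, optimizer, lr_dem, hr_dem):
    """One optimisation step minimising the error between the network output and
    the expected high-resolution DEM (both tensors of shape [N, 1, H, W])."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(lr_dem), hr_dem)
    loss.backward()
    optimizer.step()
    return loss.item()

model = DemSRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

    Training would simply loop `train_step` over pairs of low- and high-resolution DEM patches, mirroring the abstract's objective of minimizing the error between the output and the expected high-resolution DEM.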

  12. Toward implementation of artificial neural networks that "really work".

    Science.gov (United States)

    Leon, M. A.; Keller, J.

    1997-01-01

    Artificial neural networks are established analytical methods in biomedical research. They have repeatedly outperformed traditional tools for pattern recognition and clinical outcome prediction while assuring continued adaptation and learning. However, successful experimental neural network systems seldom reach a production state; that is, they are not incorporated into clinical information systems. It could be speculated that neural networks simply must undergo a lengthy acceptance process before they become part of the day-to-day operations of health care systems. However, our experience trying to incorporate experimental neural networks into information systems leads us to believe that there are technical and operational barriers that greatly hinder neural network implementation. A solution for these problems may be the delineation of policies and procedures for neural network implementation and the development of a new class of neural network client/server applications that fit the needs of current clinical information systems. PMID:9357613

  13. Evolving Chart Pattern Sensitive Neural Network Based Forex Trading Agents

    CERN Document Server

    Sher, Gene I

    2011-01-01

    Though machine learning has been applied to the foreign exchange market for quite some time now, and neural networks have been shown to yield good results, in modern approaches neural network systems are optimized through traditional methods, and their input signals are vectors containing prices and other indicator elements. The aim of this paper is twofold: the presentation and testing of the application of topology and weight evolving artificial neural network (TWEANN) systems to automated currency trading, and the use of chart images as input to geometrical-regularity-aware, indirectly encoded neural network systems. This paper presents the benchmark results of neural network based automated currency trading systems evolved using TWEANNs, and compares the generalization capabilities of the direct-encoded neural networks which use the standard price vector inputs and the indirect (substrate) encoded neural networks which use chart images as input. The TWEANN algorithm used to evolve these currency t...

  14. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexity: its number of state variables is equal to the dimension of the optimization problem. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
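    As a rough illustration of how a projection network of this general kind operates (not the specific model or convergence conditions of the paper), the sketch below Euler-integrates projection dynamics for a box-constrained quadratic minimax problem; the problem data are made up. Note that the state vector has exactly the dimension of the decision variables, matching the complexity remark in the abstract.

```python
# Illustrative sketch of projection-network dynamics for the box-constrained
# quadratic minimax problem   min_x max_y  0.5 x'Ax + x'By - 0.5 y'Cy,
# with x in [lx, ux] and y in [ly, uy].  This shows one common form of such
# dynamics (integrated here with explicit Euler steps) and is not claimed to be
# the exact model of the paper.
import numpy as np

def project(z, lo, hi):
    return np.clip(z, lo, hi)          # projection onto the box constraints

def solve_minimax(A, B, C, lx, ux, ly, uy, steps=20000, dt=1e-3):
    x = np.zeros(A.shape[0])
    y = np.zeros(C.shape[0])
    for _ in range(steps):
        gx = A @ x + B @ y             # gradient of the objective w.r.t. x
        gy = B.T @ x - C @ y           # gradient of the objective w.r.t. y
        # State equations: move each state towards its projected gradient step
        # (descent in x, ascent in y).
        x += dt * (project(x - gx, lx, ux) - x)
        y += dt * (project(y + gy, ly, uy) - y)
    return x, y

# Small example with assumed problem data.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
C = np.array([[1.0]])
B = np.array([[1.0], [-1.0]])
x_star, y_star = solve_minimax(A, B, C, lx=-1, ux=1, ly=-1, uy=1)
print("approximate saddle point:", x_star, y_star)
```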

  15. Neural network models of categorical perception.

    Science.gov (United States)

    Damper, R I; Harnad, S R

    2000-05-01

    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

  16. Registration Cost Performance Analysis of a Hierarchical Mobile Internet Protocol Network

    Institute of Scientific and Technical Information of China (English)

    XU Kai; JI Hong; YUE Guang-xin

    2004-01-01

    On the basis of introducing the principles of hierarchical mobile Internet protocol networks, the registration cost performance of this network model is analyzed in detail. Furthermore, the functional relationship among the registration cost, the hierarchical level number and the maximum number of handovers for gateway foreign agent regional registration is also established in the paper. Finally, the registration cost of the hierarchical mobile Internet protocol network is compared with that of the traditional mobile Internet protocol. Theoretical analysis and computer simulation results show that the hierarchical level number and the maximum number of handovers both affect the registration cost significantly; when suitable values are chosen, the hierarchical network can significantly improve the registration performance compared with the traditional mobile IP.

  17. A SPEECH RECOGNITION METHOD USING COMPETITIVE AND SELECTIVE LEARNING NEURAL NETWORKS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    On the basis of the asymptotic theory of Gersho, the isodistortion principle of vector clustering is discussed, and a competitive and selective learning method (CSL) is proposed which can avoid local optima and gives excellent results when applied to the clustering of HMM models. By combining parallel, self-organizational hierarchical neural networks (PSHNN) to reclassify the scores output by the HMM, the CSL speech recognition rate is clearly improved.

  18. Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition

    OpenAIRE

    Li, Xiangang; Wu, Xihong

    2014-01-01

    Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions on LSTM are investigated considering that deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed an...

  19. Hierarchical Real-time Network Traffic Classification Based on ECOC

    Directory of Open Access Journals (Sweden)

    Yaou Zhao

    2013-09-01

    Full Text Available Classification of network traffic is basic and essential for many network researches and managements. With the rapid development of peer-to-peer (P2P) applications using dynamic port disguising techniques and encryption to avoid detection, port-based and simple payload-based network traffic classification methods have become much less effective. An alternative method based on statistics and machine learning has attracted researchers' attention in recent years. However, most of the proposed algorithms are off-line and usually use a single classifier. In this paper a new hierarchical real-time model is proposed which comprises a three-tuple (source ip, destination ip and destination port) look-up table (TT-LUT) part and a layered milestone part. The TT-LUT is used to quickly classify short flows, which need not pass through the layered milestone part, and the milestones in the layered milestone part can classify the other flows in real-time with real-time feature selection and statistics. Every milestone is an ECOC (Error-Correcting Output Codes) based model which is used to improve classification performance. Experiments showed that the proposed model can improve the efficiency of real-time processing to 80%, and the multi-class classification accuracy encouragingly to 91.4%, on datasets captured from the backbone router of our campus over a week.
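    The following sketch illustrates the two-part structure described above: a three-tuple look-up table answers known short flows immediately, and only the remaining flows pass through ordered "milestone" classifiers. The table entries, thresholds and classifier interface are placeholders, not the paper's implementation.

```python
# Hedged sketch of the hierarchical real-time design: a three-tuple look-up
# table (TT-LUT) gives an immediate answer for known flows, and only the
# remaining flows are passed to layered "milestone" classifiers (ECOC-based
# models in the paper).  Table contents and classifiers here are placeholders.

tt_lut = {
    # (source ip, destination ip, destination port) -> application class
    ("10.0.0.5", "10.0.1.9", 80): "web",
    ("10.0.0.7", "10.0.2.3", 6881): "p2p",
}

def classify_flow(flow, milestones):
    """flow: dict with keys 'src_ip', 'dst_ip', 'dst_port', 'features'.
    milestones: ordered list of (classifier, confidence_threshold) pairs,
    each classifier returning (label, confidence) from real-time statistics."""
    key = (flow["src_ip"], flow["dst_ip"], flow["dst_port"])
    if key in tt_lut:                        # fast path for short/known flows
        return tt_lut[key]
    for clf, threshold in milestones:        # layered milestone part
        label, confidence = clf(flow["features"])
        if confidence >= threshold:
            tt_lut[key] = label              # cache the decision for later flows
            return label
    return "unknown"
```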

  20. Category theoretic analysis of hierarchical protein materials and social networks

    CERN Document Server

    Spivak, David I; Buehler, Markus J

    2011-01-01

    Materials in biology span all the scales from Angstroms to meters and typically consist of complex hierarchical assemblies of simple building blocks. Here we review an application of category theory to describe structural and resulting functional properties of biological protein materials by developing so-called ologs. An olog is like a "concept web" or "semantic network" except that it follows a rigorous mathematical formulation based on category theory. This key difference ensures that an olog is unambiguous, highly adaptable to evolution and change, and suitable for sharing concepts with other ologs. We consider a simple example of an alpha-helical and an amyloid-like protein filament subjected to axial extension and develop an olog representation of their structural and resulting mechanical properties. We also construct a representation of a social network in which people send text-messages to their nearest neighbors and act as a team to perform a task. We show that the olog for the protein and the olog f...

  1. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  2. Development of Polymer Resins using Neural Networks

    Directory of Open Access Journals (Sweden)

    Fabiano A. N. Fernandes

    2002-01-01

    Full Text Available The development of polymer resins can benefit from the application of neural networks, given their great ability to correlate inputs and outputs. In this work we have developed a procedure that uses neural networks to correlate the end-user properties of a polymer with the polymerization reactor's operational conditions that will produce the desired polymer. This procedure is aimed at speeding up the development of new resins and at helping to find the appropriate operational conditions to produce a given polymer resin, reducing experimentation and pilot plant tests, and therefore the time and money spent on development. The procedure shown in this paper can predict the reactor's operational conditions with an error lower than 5%.

  3. Neural network correction of astrometric chromaticity

    CERN Document Server

    Gai, M

    2005-01-01

    In this paper we deal with the problem of chromaticity, i.e. the apparent position variation of stellar images with their spectral distribution, using neural networks to analyse and process astronomical images. The goal is to remove this relevant source of systematic error in the data reduction of high precision astrometric experiments, like Gaia. This task can be accomplished thanks to the capability of neural networks to solve a nonlinear approximation problem, i.e. to construct a hypersurface that approximates a given set of scattered data couples. Images are encoded by associating each of them with conveniently chosen moments, evaluated along the y axis. The technique proposed, in the current framework, reduces the initial chromaticity of a few milliarcseconds to values of a few microarcseconds.

  4. Design of fiber optic adaline neural networks

    Science.gov (United States)

    Ghosh, Anjan K.; Trepka, Jim

    1997-03-01

    Based on possible optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators we describe the design of a single-layer fiber optic Adaline neural network that can be used as a bit pattern classifier. In our design, we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The described new optical neural network design is for optical processing of guided light wave signals, not electronic signals. We analyze the convergence or learning characteristics of the optoelectronic Adaline in the presence of errors in the hardware. We show that with such an optoelectronic Adaline it is possible to detect a desired code word/token/header with good accuracy.
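    Setting the optics aside, the adaptive element itself is a single linear Adaline unit trained with the LMS (Widrow-Hoff) rule. The sketch below shows that rule applied to detecting an assumed 8-bit code word among random bit patterns; the code word, step size and training schedule are illustrative only, not the paper's configuration.

```python
# Minimal Adaline (LMS / Widrow-Hoff) sketch for bit-pattern detection.  It
# ignores the fiber-optic implementation and only shows the adaptive rule that
# a tapped-delay-line realisation would compute; the target code word is assumed.
import numpy as np

rng = np.random.default_rng(1)
code_word = np.array([1, -1, 1, 1, -1, 1, -1, -1])   # assumed 8-bit header (+/-1)

w = np.zeros(8)
bias = 0.0
eta = 0.05                                           # LMS step size

for _ in range(2000):
    # Training example: either the code word or a random bit pattern.
    if rng.random() < 0.5:
        x, target = code_word, 1.0
    else:
        x = rng.choice([-1.0, 1.0], size=8)
        target = 1.0 if np.array_equal(x, code_word) else -1.0
    y = w @ x + bias                                 # linear (Adaline) output
    err = target - y
    w += eta * err * x                               # Widrow-Hoff update
    bias += eta * err

detected = np.sign(w @ code_word + bias)
print("code word classified as:", detected)
```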

  5. Web Page Categorization Using Artificial Neural Networks

    CERN Document Server

    Kamruzzaman, S M

    2010-01-01

    Web page categorization is one of the challenging tasks in the world of ever increasing web technologies. There are many ways of categorization of web pages based on different approach and features. This paper proposes a new dimension in the way of categorization of web pages using artificial neural network (ANN) through extracting the features automatically. Here eight major categories of web pages have been selected for categorization; these are business & economy, education, government, entertainment, sports, news & media, job search, and science. The whole process of the proposed system is done in three successive stages. In the first stage, the features are automatically extracted through analyzing the source of the web pages. The second stage includes fixing the input values of the neural network; all the values remain between 0 and 1. The variations in those values affect the output. Finally the third stage determines the class of a certain web page out of eight predefined classes. This stage i...

  6. Neural networks for aerosol particles characterization

    Science.gov (United States)

    Berdnik, V. V.; Loiko, V. A.

    2016-11-01

    Multilayer perceptron neural networks with one, two and three inputs are built to retrieve parameters of spherical homogeneous nonabsorbing particle. The refractive index ranges from 1.3 to 1.7; particle radius ranges from 0.251 μm to 56.234 μm. The logarithms of the scattered radiation intensity are used as input signals. The problem of the most informative scattering angles selection is elucidated. It is shown that polychromatic illumination helps one to increase significantly the retrieval accuracy. In the absence of measurement errors relative error of radius retrieval by the neural network with three inputs is 0.54%, relative error of the refractive index retrieval is 0.84%. The effect of measurement errors on the result of retrieval is simulated.

  7. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  8. Neural Network Program Package for Prosody Modeling

    Directory of Open Access Journals (Sweden)

    J. Santarius

    2004-04-01

    Full Text Available This contribution describes the programme for one part of the automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented the considerable improvement of the naturalness of synthetic speech, but this approach requires completing the input feature values by hand. This completing takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for the text and speech signal pre-processing, neural network training, sensitivity analysis, result processing and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].

  9. Face Recognition using Eigenfaces and Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohamed Rizon

    2006-01-01

    Full Text Available In this study, we develop a computational model to identify the face of an unknown person by applying eigenfaces. The eigenfaces have been applied to extract the basic features of human face images. The eigenfaces are then projected onto human faces to identify unique feature vectors. These significant feature vectors can be used to identify an unknown face by using a backpropagation neural network that utilizes Euclidean distance for classification and recognition. The ORL database used for this investigation consists of 400 face images of 40 people, which were used for the learning. The eigenfaces computation, including an implementation of Jacobi's method for eigenvalues and eigenvectors, has been performed. The classification and recognition using the backpropagation neural network showed impressively positive results in classifying face images.
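    A compact sketch of the eigenfaces pipeline described above: PCA over the training images yields the eigenfaces, faces are projected onto them to obtain feature vectors, and an unknown face is matched by Euclidean distance. The study classifies the projected vectors with a backpropagation network; a nearest-neighbour step stands in for that classifier here for brevity.

```python
# Hedged sketch of the eigenfaces pipeline: compute eigenfaces by PCA over the
# training images, project faces onto them, and recognise an unknown face by
# Euclidean distance.  A nearest-neighbour match replaces the backpropagation
# classifier used in the study.
import numpy as np

def compute_eigenfaces(train_images, n_components=20):
    """train_images: array of shape (n_samples, n_pixels), one flattened face per row."""
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face
    # SVD of the centered data gives the eigenfaces as right singular vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]

def project(images, mean_face, eigenfaces):
    return (images - mean_face) @ eigenfaces.T

def recognise(unknown, train_features, train_labels, mean_face, eigenfaces):
    feat = project(unknown[None, :], mean_face, eigenfaces)[0]
    dists = np.linalg.norm(train_features - feat, axis=1)   # Euclidean distance
    return train_labels[int(np.argmin(dists))]
```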

  10. Multi-Dimensional Recurrent Neural Networks

    CERN Document Server

    Graves, Alex; Schmidhuber, Juergen

    2007-01-01

    Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.

  11. On analog implementations of discrete neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.; Moore, K.R.

    1998-12-01

    The paper will show that in order to obtain minimum size neural networks (i.e., size-optimal) for implementing any Boolean function, the nonlinear activation function of the neurons has to be the identity function. The authors shall shortly present many results dealing with the approximation capabilities of neural networks, and detail several bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions they will show that implementing Boolean functions can be done using neurons having an identity nonlinear function. It follows that size-optimal solutions can be obtained only using analog circuitry. Conclusions and several comments on the required precision end the paper.

  12. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  13. Applying neural networks to optimize instrumentation performance

    Energy Technology Data Exchange (ETDEWEB)

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  14. Identifying Tracks Duplicates via Neural Network

    CERN Document Server

    Sunjerga, Antonio; CERN. Geneva. EP Department

    2017-01-01

    The goal of the project is to study the feasibility of state-of-the-art machine learning techniques in track reconstruction. Machine learning techniques provide promising ways to speed up the pattern recognition of tracks by adding more intelligence to the algorithms. The implementation of a neural network for identifying track duplicates will be discussed. Different approaches are shown and the results are compared to the method that is currently in use.

  15. Neural Network-Based Hyperspectral Algorithms

    Science.gov (United States)

    2016-06-07

    Neural Network-Based Hyperspectral Algorithms. Walter F. Smith, Jr. and Juanita Sandidge, Naval Research Laboratory, Code 7340, Bldg 1105, Stennis Space Center. ... Our effort is the development of robust numerical inversion algorithms, which will retrieve inherent optical properties of the water column as well as ... validate the resulting inversion algorithms with in-situ data and provide estimates of the error bounds associated with the inversion algorithm.

  16. Diagnosing process faults using neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

  17. Artificial Neural Networks in Stellar Astronomy

    Directory of Open Access Journals (Sweden)

    R. K. Gulati

    2001-01-01

    Full Text Available Next generation of optical spectroscopic surveys, such as the Sloan Digital Sky Survey and the 2 degree field survey, will provide large stellar databases. New tools will be required to extract useful information from these. We show the applications of artificial neural networks to stellar databases. In another application of this method, we predict spectral and luminosity classes from the catalog of spectral indices. We assess the importance of such methods for stellar populations studies.

  18. Neural Networks with Complex and Quaternion Inputs

    OpenAIRE

    Rishiyur, Adityan

    2006-01-01

    This article investigates Kak neural networks, which can be instantaneously trained, for complex and quaternion inputs. The performance of the basic algorithm has been analyzed, and it is shown how it provides a plausible model of human perception and understanding of images. The motivation for studying quaternion inputs is their use in representing spatial rotations, which find applications in computer graphics, robotics, global navigation, computer vision and the spatial orientation of instruments. ...

  19. Adaptive Filtering Using Recurrent Neural Networks

    Science.gov (United States)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  20. Neural Networks in Chemical Reaction Dynamics

    CERN Document Server

    Raff, Lionel; Hagan, Martin

    2011-01-01

    This monograph presents recent advances in neural network (NN) approaches and applications to chemical reaction dynamics. Topics covered include: (i) the development of ab initio potential-energy surfaces (PES) for complex multichannel systems using modified novelty sampling and feedforward NNs; (ii) methods for sampling the configuration space of critical importance, such as trajectory and novelty sampling methods and gradient fitting methods; (iii) parametrization of interatomic potential functions using a genetic algorithm accelerated with a NN; (iv) parametrization of analytic interatomic

  1. Sensor Networks Hierarchical Optimization Model for Security Monitoring in High-Speed Railway Transport Hub

    Directory of Open Access Journals (Sweden)

    Zhengyu Xie

    2015-01-01

    Full Text Available We consider the sensor networks hierarchical optimization problem in high-speed railway transport hub (HRTH. The sensor networks are optimized from three hierarchies which are key area sensors optimization, passenger line sensors optimization, and whole area sensors optimization. Case study on a specific HRTH in China showed that the hierarchical optimization method is effective to optimize the sensor networks for security monitoring in HRTH.

  2. A Bionic Neural Network for Fish-Robot Locomotion

    Institute of Scientific and Technical Information of China (English)

    Dai-bing Zhang; De-wen Hu; Lin-cheng Shen; Hai-bin Xie

    2006-01-01

    A bionic neural network for fish-robot locomotion is presented. The bionic neural network, inspired by the fish neural network, consists of one high level controller and one chain of central pattern generators (CPGs). Each CPG contains a nonlinear neural Zhang oscillator which shows properties similar to the sine-cosine model. Simulation results show that the bionic neural network presents a good performance in controlling the fish-robot to execute various motions such as startup, stop, forward swimming, backward swimming, turning right and turning left.

  3. Fast implementation of neural network classification

    Science.gov (United States)

    Seo, Guiwon; Ok, Jiheon; Lee, Chulhee

    2013-09-01

    Most artificial neural networks employ nonlinear activation functions, such as the sigmoid and hyperbolic tangent, which incur high complexity costs, particularly during hardware implementation. In this paper, we propose new polynomial approximation methods for nonlinear activation functions that can substantially reduce complexity without sacrificing performance. The proposed approximation methods were applied to pattern classification problems. Experimental results show that the processing time was reduced by up to 50% without any performance degradation in terms of computer simulation.
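    One simple way to realize such a polynomial approximation (not necessarily the method of the paper) is a least-squares polynomial fit to the activation over a bounded input range, with clamping outside it; the degree and range below are assumptions.

```python
# Minimal sketch: replace the sigmoid with a low-degree polynomial fitted over a
# bounded input range (degree and range are assumed, not the paper's choices).
# Outside the fitted range the output is clamped to 0 or 1.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a degree-5 polynomial to the sigmoid on [-6, 6].
xs = np.linspace(-6, 6, 2001)
coeffs = np.polyfit(xs, sigmoid(xs), deg=5)

def sigmoid_poly(x):
    x = np.asarray(x, dtype=float)
    y = np.polyval(coeffs, np.clip(x, -6, 6))
    y = np.where(x < -6, 0.0, y)
    y = np.where(x > 6, 1.0, y)
    return np.clip(y, 0.0, 1.0)

print("max abs error on [-6, 6]:", np.max(np.abs(sigmoid_poly(xs) - sigmoid(xs))))
```

    Evaluating the polynomial needs only a handful of multiply-accumulate operations instead of a transcendental function, which is the general kind of saving such approximations target.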

  4. Multilingual Text Detection with Nonlinear Neural Network

    Directory of Open Access Journals (Sweden)

    Lin Li

    2015-01-01

    Full Text Available Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network based on the traditional Convolutional Neural Network that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.

  5. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    Nomenclature: NN, neural network; net_i, weighted sum of the inputs of neuron i; o_k, network output at the kth output node; P, total number of training patterns; s_i, output of neuron i; t_k, target output at the kth output node; w_ij, weight from neuron j to neuron i; YM, Young's model. 1. Introduction: Severe storms occur in the Bay of Bengal ... useful in the planning and maintenance of marine activities. Wave hindcasting is a non-real-time application of numerical wave models in the broad field of climatology.

  6. Deep learning in neural networks: an overview.

    Science.gov (United States)

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

  7. Spectral characterization of hierarchical network modularity and limits of modularity detection.

    Directory of Open Access Journals (Sweden)

    Somwrita Sarkar

    Full Text Available Many real world networks are reported to have hierarchically modular organization. However, there exists no algorithm-independent metric to characterize hierarchical modularity in a complex system. The main results of the paper are a set of methods to address this problem. First, classical results from random matrix theory are used to derive the spectrum of a typical stochastic block model hierarchical modular network form. Second, it is shown that hierarchical modularity can be fingerprinted using the spectrum of its largest eigenvalues and gaps between clusters of closely spaced eigenvalues that are well separated from the bulk distribution of eigenvalues around the origin. Third, some well-known results on fingerprinting non-hierarchical modularity in networks automatically follow as special cases, thereby unifying these previously fragmented results. Finally, using these spectral results, it is found that the limits of detection of modularity can be empirically established by studying the mean values of the largest eigenvalues and the limits of the bulk distribution of eigenvalues for an ensemble of networks. It is shown that even when modularity and hierarchical modularity are present in a weak form in the network, they are impossible to detect, because some of the leading eigenvalues fall within the bulk distribution. This provides a threshold for the detection of modularity. Eigenvalue distributions of some technological, social, and biological networks are studied, and the implications of detecting hierarchical modularity in real world networks are discussed.
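    The fingerprinting idea can be illustrated numerically: generate a two-level stochastic block model, compute the adjacency spectrum, and inspect the leading eigenvalues and the gaps that separate them from the bulk. The block sizes and connection probabilities in this sketch are arbitrary choices, not those studied in the paper.

```python
# Small numerical illustration of spectral fingerprinting of hierarchical
# modularity: build a two-level (hierarchically modular) stochastic block model,
# compute its adjacency eigenvalues, and look at the leading eigenvalues that
# separate from the bulk.  Block sizes and probabilities are assumptions.
import numpy as np

rng = np.random.default_rng(42)

def hierarchical_sbm(n_per_block=50, p_in=0.30, p_mid=0.10, p_out=0.01):
    """4 blocks grouped into 2 super-modules: dense within blocks, intermediate
    within a super-module, sparse between super-modules."""
    n = 4 * n_per_block
    block = np.repeat(np.arange(4), n_per_block)
    superm = block // 2
    P = np.where(block[:, None] == block[None, :], p_in,
                 np.where(superm[:, None] == superm[None, :], p_mid, p_out))
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    return A + A.T                      # symmetric adjacency, no self-loops

A = hierarchical_sbm()
eigvals = np.sort(np.linalg.eigvalsh(A))[::-1]
print("leading eigenvalues:", np.round(eigvals[:6], 2))
print("approximate bulk edge (7th eigenvalue):", round(eigvals[6], 2))
```

    As the probabilities p_in, p_mid and p_out approach each other, the leading eigenvalues sink into the bulk, which is a simple way to see the detection threshold discussed in the abstract.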

  8. Hierarchical Structure, Disassortativity and Information Measures of the US Flight Network

    Institute of Scientific and Technical Information of China (English)

    WANG Ru; CAI Xu

    2005-01-01

    We investigate the mixing structure of the directed and evolutionary US flight network. It is shown that such a network is a hierarchical network, with an average assortativity coefficient of -0.37. Application of the information-based method, which gives the same result, provides a way to explore the structure of complex networks.

  9. Rule Extraction Algorithm for Deep Neural Networks: A Review

    OpenAIRE

    Hailesilassie, Tameru

    2016-01-01

    Despite achieving the highest classification accuracy in a wide variety of application areas, artificial neural networks have one disadvantage: the way the network comes to a decision is not easily comprehensible. This lack of explanation ability reduces the acceptability of neural networks in data mining and decision systems. This drawback is the reason why researchers have proposed many rule extraction algorithms to solve the problem. Recently, the Deep Neural Network (DNN) is achieving a profound result ove...

  10. Classification of Respiratory Sounds by Using An Artificial Neural Network

    Science.gov (United States)

    2007-11-02

    CLASSIFICATION OF RESPIRATORY SOUNDS BY USING AN ARTIFICIAL NEURAL NETWORK. M.C. Sezgin, Z. Dokur, T. Ölmez, M. Korürek, Department of Electronics and ... successfully classified by the GAL network. Keywords: Respiratory Sounds, Classification of Biomedical Signals, Artificial Neural Network. I. INTRODUCTION ... process, feature extraction, and classification by the artificial neural network. At first, the RS signal obtained from a real-time measurement equipment is ...

  11. Efficient implementation of neural network deinterlacing

    Science.gov (United States)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitters. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP monitors and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications and a number of deinterlacing methods have been proposed. Recently deinterlacing methods based on neural network have been proposed with good results. On the other hand, with high resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated in HW implementation.

  12. File access prediction using neural networks.

    Science.gov (United States)

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap between the access times of the memory and the disk. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks to significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference with a properly tuned neural-network-based file access predictor. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve upon the misprediction rate and effective-success-rate-per-reference of the standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) network gives better predictions in high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation performs better in systems having good computational capability. Probabilistic and competitive predictors are the most suitable for workstations having limited resources, and the former predictor is more efficient than the latter for servers handling the maximum number of system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than the simple perceptron, last successor, stable successor, and best-k-out-of-m predictors.

  13. Neural Network Approach for Eye Detection

    CERN Document Server

    Vijayalaxmi,; Sreehari, S

    2012-01-01

    Driving support systems, such as car navigation systems, are becoming common and they support the driver in several aspects. Non-intrusive methods of detecting fatigue and drowsiness based on eye-blink count and eye-directed instruction control help the driver avoid collisions caused by drowsy driving. Eye detection and tracking under various conditions such as illumination, background, face alignment and facial expression make the problem complex. A Neural Network based algorithm is proposed in this paper to detect the eyes efficiently. In the proposed algorithm, first the neural network is trained to reject the non-eye region based on images with features of eyes and images with features of non-eyes, using Gabor filters and Support Vector Machines to reduce the dimension and classify efficiently. In the algorithm, first the face is segmented using the L*a*b transform color space, then the eyes are detected using HSV and the Neural Network approach. The algorithm is tested on nearly 100 images of different persons under...

  14. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
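    For readers who want to reproduce the general approach (not the paper's data or network), a back-propagation regression model of this kind can be sketched with scikit-learn; the feature layout, the synthetic strength relation and the network size below are placeholders.

```python
# Hedged sketch of a back-propagation network for compressive-strength
# prediction using scikit-learn.  The feature layout (cement, water, aggregates,
# MAS, slump) and the synthetic data are placeholders, not the data sets
# gathered in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
# Columns (assumed): cement, water, fine aggregate, coarse aggregate, MAS, slump.
X = rng.uniform([250, 140, 600, 900, 10, 50],
                [500, 220, 900, 1200, 40, 200], size=(n, 6))
w_c = X[:, 1] / X[:, 0]                              # water/cement ratio
y = 80 - 60 * w_c + rng.normal(0, 3, n)              # toy strength relation (MPa)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```

    A parametric study like the one in the abstract would then vary one input (e.g. w/c) over its range while holding the others fixed and record the predicted strength.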

  15. The next generation of neural network chips

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1997-08-01

    There have been many national and international neural network research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENE NERVES 2) -- just to mention a few. Recent developments in the field of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions, and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could imagine just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers, and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.

  16. Phase Transitions in Living Neural Networks

    Science.gov (United States)

    Williams-Garcia, Rashid Vladimir

    Our nervous systems are composed of intricate webs of interconnected neurons interacting in complex ways. These complex interactions result in a wide range of collective behaviors with implications for features of brain function, e.g., information processing. Under certain conditions, such interactions can drive neural network dynamics towards critical phase transitions, where power-law scaling is conjectured to allow optimal behavior. Recent experimental evidence is consistent with this idea and it seems plausible that healthy neural networks would tend towards optimality. This hypothesis, however, is based on two problematic assumptions, which I describe and for which I present alternatives in this thesis. First, critical transitions may vanish due to the influence of an environment, e.g., a sensory stimulus, and so living neural networks may be incapable of achieving "critical" optimality. I develop a framework known as quasicriticality, in which a relative optimality can be achieved depending on the strength of the environmental influence. Second, the power-law scaling supporting this hypothesis is based on statistical analysis of cascades of activity known as neuronal avalanches, which conflate causal and non-causal activity, thus confounding important dynamical information. In this thesis, I present a new method to unveil causal links, known as causal webs, between neuronal activations, thus allowing for experimental tests of the quasicriticality hypothesis and other practical applications.

  17. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    Energy Technology Data Exchange (ETDEWEB)

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, 47 from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick-stop training procedure, so the samples were split into training, calibration and prediction subsets. Special techniques based on genetic algorithms were developed to split the samples representatively into the three subsets. Two separate approaches were tried: in one, the screened and unscreened coal were modeled separately; in the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e., although each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples) but not at 20 seconds (individual predictions).
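
    A minimal sketch of this calibration setup, assuming synthetic Americium/Cesium counts and using scikit-learn's built-in early stopping as a stand-in for the quick-stop procedure; the genetic-algorithm subset selection is not reproduced here.

```python
# Hedged sketch: relating two analyzer count channels to ash content with a
# small network and early stopping ("quick stop"). All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 104                                        # same sample count as the study
counts = rng.uniform(1e3, 5e3, size=(n, 2))    # hypothetical Am and Cs counts
ash = 0.004 * counts[:, 0] - 0.002 * counts[:, 1] + rng.normal(0, 0.5, n)

idx = rng.permutation(n)
train_idx, pred_idx = idx[:80], idx[80:]       # prediction subset held out entirely

model = MLPRegressor(hidden_layer_sizes=(6,), early_stopping=True,
                     validation_fraction=0.25,  # stand-in for the calibration subset
                     max_iter=5000, random_state=0)
model.fit(counts[train_idx], ash[train_idx])

# single predictions are noisy; averages over ~6-9 samples (2-3 minutes) are stabler
pred = model.predict(counts[pred_idx])
print("per-sample errors:", np.round(np.abs(pred - ash[pred_idx]), 2))
print("error of the mean:", round(abs(pred.mean() - ash[pred_idx].mean()), 2))
```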

  18. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford University Press, 1995.
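
    As a hedged illustration of the convolutional approach mentioned above (not the authors' model), the sketch below trains a tiny 1D CNN in PyTorch to separate two classes of synthetic stick spectra distinguished only by their line spacing, which is the kind of frequency-spacing pattern such networks are meant to pick up.

```python
# Toy sketch, assuming PyTorch: a 1D CNN classifying synthetic stick spectra.
# Real training data would be simulated rotational spectra, not these toys.
import torch
import torch.nn as nn

def synth_spectrum(spacing, n_points=512):
    """Evenly spaced lines loosely mimic the ~2B spacing of a linear rotor."""
    x = torch.zeros(n_points)
    idx = torch.arange(spacing, n_points, spacing)
    x[idx] = torch.rand(idx.numel())
    return x + 0.01 * torch.randn(n_points)      # add a little noise floor

X = torch.stack([synth_spectrum(s) for s in [8] * 64 + [13] * 64]).unsqueeze(1)
y = torch.tensor([0] * 64 + [1] * 64)

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                               # short loop on the toy data
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    print("training accuracy:", (model(X).argmax(1) == y).float().mean().item())
```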

  19. Neural network parameters affecting image classification

    Directory of Open Access Journals (Sweden)

    K.C. Tiwari

    2001-07-01

    Full Text Available This study assesses the behaviour and impact of various neural network parameters and their effects on the classification accuracy of remotely sensed images, and it resulted in the successful classification of an IRS-1B LISS II image of Roorkee and its surrounding areas using neural network classification techniques. The method can be applied to various defence applications, such as the identification of enemy troop concentrations and logistical planning in deserts through the identification of areas suitable for vehicular movement. Five parameters were selected: training sample size, number of hidden layers, number of hidden nodes, learning rate and momentum factor. In each case, sets of values were decided based on earlier reported work. Neural network-based classifications were carried out for as many as 450 combinations of these parameters. Finally, a graphical analysis of the results was carried out to understand the relationships among these parameters. A table of recommended parameter values for achieving 90 per cent and higher classification accuracy was generated and used in the classification of an IRS-1B LISS II image. The analysis suggests the existence of an intricate relationship among these parameters and calls for a wider series of classification experiments, as well as a more detailed analysis of the relationships.
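
    A minimal sketch of this kind of parameter sweep, assuming scikit-learn's MLPClassifier and synthetic four-band data in place of the IRS-1B LISS II image; the parameter grid and accuracies are illustrative only.

```python
# Hedged sketch: sweep hidden nodes, learning rate and momentum for an MLP
# classifier and record test accuracy, mirroring the study's parameter study.
import itertools
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for 4-band multispectral pixels with 3 land-cover classes
X, y = make_classification(n_samples=1500, n_features=4, n_informative=3,
                           n_redundant=1, n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = []
for nodes, lr, mom in itertools.product([5, 15, 30], [0.01, 0.1], [0.5, 0.9]):
    clf = MLPClassifier(hidden_layer_sizes=(nodes,), solver="sgd",
                        learning_rate_init=lr, momentum=mom,
                        max_iter=1000, random_state=0)
    clf.fit(X_tr, y_tr)
    results.append((nodes, lr, mom, clf.score(X_te, y_te)))

# report the best few combinations, analogous to a table of recommended values
for nodes, lr, mom, acc in sorted(results, key=lambda r: -r[3])[:5]:
    print(f"nodes={nodes:2d} lr={lr} momentum={mom} accuracy={acc:.2f}")
```

    In the study the sweep also varied training sample size and the number of hidden layers, giving the 450 combinations mentioned above; extending the grid accordingly is straightforward.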

  20. Markovian architectural bias of recurrent neural networks.

    Science.gov (United States)

    Tino, Peter; Cernanský, Michal; Benusková, Lubica

    2004-01-01

    In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information processing states even prior to training [1], [2]. By concentrating on activation clusters in RNNs, while not throwing away the continuous state space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure. Index Terms: complex symbolic sequences, information latching problem, iterative function systems, Markov models, recurrent neural networks (RNNs).
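
    A rough sketch of extracting such a neural prediction machine from an untrained sigmoid RNN with small weights: recurrent activations over a toy symbolic sequence are clustered, and each cluster predicts the next symbol by counting. Network sizes, the random sequence and the number of clusters are arbitrary choices, not those of the paper.

```python
# Illustrative NPM extraction from an *untrained* RNN: small random weights
# induce the Markovian architectural bias discussed in the abstract.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
symbols = rng.integers(0, 2, size=5000)          # toy binary symbolic sequence
n_hidden, n_sym = 10, 2

W_in = rng.normal(0, 0.1, (n_hidden, n_sym))     # small weights, never trained
W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

states = np.zeros((len(symbols), n_hidden))
h = np.zeros(n_hidden)
for t, s in enumerate(symbols):
    h = sigmoid(W_in @ np.eye(n_sym)[s] + W_rec @ h)   # recurrent activation
    states[t] = h

# cluster the recurrent activations; clusters act as Markov-like prediction contexts
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(states[:-1])

counts = np.zeros((8, n_sym))
for c, nxt in zip(clusters, symbols[1:]):
    counts[c, nxt] += 1                          # next-symbol counts per context
print(np.round(counts / counts.sum(axis=1, keepdims=True), 2))
```

    Comparing such an untrained-NPM baseline against the trained RNN is the kind of "null model" comparison the paper advocates.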