WorldWideScience

Sample records for network learning algorithm

  1. PARALLEL ALGORITHM FOR BAYESIAN NETWORK STRUCTURE LEARNING

    Directory of Open Access Journals (Sweden)

    S. A. Arustamov

    2013-03-01

    The article deals with the implementation of a scalable parallel algorithm for structure learning of a Bayesian network. A comparative analysis of the sequential and parallel algorithms is presented.

  2. A Decomposition Algorithm for Learning Bayesian Network Structures from Data

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Cordero Hernandez, Jorge

    2008-01-01

    Learning a large Bayesian network from a small data set is a challenging task. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm that constructs the structure without having to learn the complete network. The new learning algorithm first finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm on several benchmark networks.

  3. Learning algorithms for feedforward networks based on finite samples

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.

    1994-09-01

    Two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by feedforward networks are discussed. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
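
    As a rough illustration of the second class of algorithms described above, the sketch below applies a Robbins-Monro style stochastic approximation (decreasing 1/k step sizes) to the weights of a small feedforward regressor; the network shape, step-size schedule, and all names are illustrative assumptions rather than the authors' construction.

      import numpy as np

      def train_robbins_monro(X, y, hidden=16, epochs=10, seed=0):
          """Train a one-hidden-layer regressor with Robbins-Monro step sizes."""
          rng = np.random.default_rng(seed)
          W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # hidden-layer weights
          b1 = np.zeros(hidden)
          w2 = rng.normal(scale=0.5, size=hidden)                # output-layer weights
          b2 = 0.0
          k = 0
          for _ in range(epochs):
              for i in rng.permutation(len(X)):
                  k += 1
                  eta = 1.0 / k          # step sizes with sum(eta) = inf, sum(eta**2) < inf
                  h = np.tanh(X[i] @ W1 + b1)
                  err = (h @ w2 + b2) - y[i]
                  w2 -= eta * err * h    # stochastic gradient of the squared error
                  b2 -= eta * err
                  g = err * w2 * (1 - h ** 2)
                  W1 -= eta * np.outer(X[i], g)
                  b1 -= eta * g
          return W1, b1, w2, b2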

  4. Bayesian network structure learning using chaos hybrid genetic algorithm

    Science.gov (United States)

    Shen, Jiajie; Lin, Feng; Sun, Wei; Chang, KC

    2012-06-01

    A new Bayesian network (BN) structure learning method using a hybrid algorithm and chaos theory is proposed. The principles of mutation and crossover from genetic algorithms and a cloud-based adaptive inertia weight are incorporated into the proposed simple particle swarm optimization (sPSO) algorithm to achieve better diversity and improve convergence speed. By means of the ergodicity and randomicity of the chaos algorithm, the initial network structure population is generated using chaotic mapping with uniform search under structure constraints. When the algorithm converges to a local minimum, a chaotic search is started to escape the local minimum and to identify a potentially better network structure. The experimental results show that this algorithm can be effectively used for BN structure learning.
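
    A minimal sketch of one ingredient described above, generating an initial structure population with a chaotic map: a logistic map drives edge inclusion, and a fixed node ordering stands in for the paper's structure constraints so that every candidate stays acyclic. All parameter values and names are assumptions, not the authors' implementation.

      import numpy as np

      def chaotic_population(n_nodes, pop_size, x0=0.37, r=4.0, density=0.2):
          """Seed a population of DAG adjacency matrices from a logistic map."""
          x = x0
          structures = []
          for _ in range(pop_size):
              adj = np.zeros((n_nodes, n_nodes), dtype=int)
              for i in range(n_nodes):
                  for j in range(i + 1, n_nodes):   # arcs only go i -> j, so graphs stay acyclic
                      x = r * x * (1.0 - x)         # chaotic logistic map iteration
                      adj[i, j] = 1 if x < density else 0
              structures.append(adj)
          return structures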

  5. Recommending Learning Activities in Social Network Using Data Mining Algorithms

    Science.gov (United States)

    Mahnane, Lamia

    In this paper, we show how data mining algorithms (e.g., the Apriori Algorithm (AP) and Collaborative Filtering (CF)) are useful in a New Social Network (NSN-AP-CF). "NSN-AP-CF" processes the clusters based on different learning styles. Next, it analyzes the habits and the interests of the users through mining the frequent episodes by the…

  6. Recommending Learning Activities in Social Network Using Data Mining Algorithms

    Science.gov (United States)

    Mahnane, Lamia

    2017-01-01

    In this paper, we show how data mining algorithms (e.g., the Apriori Algorithm (AP) and Collaborative Filtering (CF)) are useful in a New Social Network (NSN-AP-CF). "NSN-AP-CF" processes the clusters based on different learning styles. Next, it analyzes the habits and the interests of the users through mining the frequent episodes by the…

  7. Projection learning algorithm for threshold-controlled neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Reznik, A.M.

    1995-03-01

    The projection learning algorithm proposed in [1, 2] and further developed in [3] substantially improves the efficiency of memorizing information and accelerates the learning process in neural networks. This algorithm is compatible with the completely connected neural network architecture (the Hopfield network [4]), but its application to other networks involves a number of difficulties. The main difficulties include constraints on interconnection structure and the need to eliminate the state uncertainty of latent neurons if such are present in the network. Despite the encouraging preliminary results of [3], further extension of the applications of the projection algorithm therefore remains problematic. In this paper, which is a continuation of the work begun in [3], we consider threshold-controlled neural networks. Networks of this type are quite common. They represent the receptor neuron layers in some neurocomputer designs. A similar structure is observed in the lower divisions of biological sensory systems [5]. In multilayer projection neural networks with lateral interconnections, the neuron layers or parts of these layers may also have the structure of a threshold-controlled completely connected network. Here the thresholds are the potentials delivered through the projection connections from other parts of the network. The extension of the projection algorithm to the class of threshold-controlled networks may accordingly prove to be useful both for extending its technical applications and for better understanding of the operation of the nervous system in living organisms.

  8. Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data

    Science.gov (United States)

    2015-07-01

    The indexed excerpt for this record is fragmentary, with reference-list entries interleaved into the body text. The recoverable portions concern learning the parameters of a Bayesian network from data with missing values, algorithms that avoid inference-heavy methods such as EM, and an analysis that holds not only under the missing at random (MAR) assumption but also for a broad class of data that is not MAR, based on a graphical representation (the excerpt breaks off at this point).

  9. General asymmetric neural networks and structure design by genetic algorithms: A learning rule for temporal patterns

    Energy Technology Data Exchange (ETDEWEB)

    Bornholdt, S. [Heidelberg Univ. (Germany), Institut für Theoretische Physik]; Graudenz, D. [Lawrence Berkeley Lab., CA (United States)]

    1993-07-01

    A learning algorithm based on genetic algorithms for asymmetric neural networks with an arbitrary structure is presented. It is suited for the learning of temporal patterns and leads to stable neural networks with feedback.

  10. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network
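
    The sketch below illustrates the kind of conditional entropy scoring the abstract mentions for assigning regulators to regulation program nodes: a candidate regulator is scored by the conditional entropy of a discretized module state given the regulator's binned expression. The binning scheme and names are assumptions, not LeMoNe's actual code.

      import numpy as np

      def conditional_entropy(module_state, regulator_expr):
          """H(module state | binned regulator expression); lower suggests a better regulator."""
          # module_state: discrete states per sample (NumPy integer array)
          # regulator_expr: continuous expression values per sample
          cuts = np.quantile(regulator_expr, [1 / 3, 2 / 3])
          reg_state = np.digitize(regulator_expr, cuts)      # 3 bins: low / medium / high
          h = 0.0
          for r in np.unique(reg_state):
              mask = reg_state == r
              p_r = mask.mean()
              _, counts = np.unique(module_state[mask], return_counts=True)
              p_m = counts / counts.sum()
              h -= p_r * np.sum(p_m * np.log2(p_m))
          return h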

  11. A constructive algorithm for unsupervised learning with incremental neural network

    Directory of Open Access Journals (Sweden)

    Jenq-Haur Wang

    2015-04-01

    In our experiment, Reuters-21578 was used as the dataset to show the effectiveness of the proposed method for text classification. The experimental results showed that our method can effectively classify texts, with a best F1-measure of 92.5%. They also showed that the learning algorithm can enhance accuracy effectively and efficiently. The framework also exhibits scalability in terms of network size, with both training and testing times showing a constant trend. This validates the feasibility of the method for practical use.

  12. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    Science.gov (United States)

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
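
    A conceptual sketch of the neural-network-biased genetic algorithm (NBGA) idea under stated assumptions: a surrogate regressor, trained on every fitness value evaluated so far, ranks a large pool of proposed offspring so that only the most promising are sent for expensive direct evaluation (simulation or experiment). The scikit-learn surrogate, the minimization convention, and the helper functions (evaluate, random_genome, mutate) are illustrative, not the authors' implementation.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def nbga(evaluate, random_genome, mutate, pop_size=20, generations=30, seed=0):
          """Genetic algorithm whose offspring selection is biased by an ANN surrogate (fitness minimized)."""
          rng = np.random.default_rng(seed)
          pop = [random_genome(rng) for _ in range(pop_size)]
          fits = [evaluate(g) for g in pop]                    # expensive direct evaluations
          X, y = list(pop), list(fits)                         # archive of everything evaluated so far
          surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
          for _ in range(generations):
              surrogate.fit(np.array(X), np.array(y))
              # propose many offspring, but only the surrogate's favorites get evaluated directly
              candidates = [mutate(pop[rng.integers(pop_size)], rng) for _ in range(5 * pop_size)]
              preds = surrogate.predict(np.array(candidates))
              chosen = [candidates[i] for i in np.argsort(preds)[:pop_size]]
              chosen_fits = [evaluate(g) for g in chosen]
              X.extend(chosen)
              y.extend(chosen_fits)
              ranked = sorted(zip(pop + chosen, fits + chosen_fits), key=lambda t: t[1])
              pop = [g for g, _ in ranked[:pop_size]]          # survivors: best individuals so far
              fits = [f for _, f in ranked[:pop_size]]
          return pop[0], fits[0]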

  13. Machine Learning for Information Retrieval: Neural Networks, Symbolic Learning, and Genetic Algorithms.

    Science.gov (United States)

    Chen, Hsinchun

    1995-01-01

    Presents an overview of artificial-intelligence-based inductive learning techniques and their use in information science research. Three methods are discussed: the connectionist Hopfield network; the symbolic ID3/ID5R; evolution-based genetic algorithms. The knowledge representations and algorithms of these methods are examined in the context of…

  14. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview of selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.

    Science.gov (United States)

    Widrow, Bernard; Greenblatt, Aaron; Kim, Youngsik; Park, Dookun

    2013-01-01

    A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the performance regarding training and generalization of both algorithms is essentially the same when the number of training patterns is less than or equal to LMS Capacity. When the number of training patterns exceeds Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop. Also, it converges much faster. It is too early to definitively say where to use one or the other of these algorithms. This is still a work in progress. Copyright © 2012 Elsevier Ltd. All rights reserved.
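
    A minimal sketch of the No-Prop idea as summarized above: hidden-layer weights are fixed at random values and only the output-layer weights are trained with the LMS (Widrow-Hoff) rule. Layer sizes, the tanh nonlinearity, and the learning rate are illustrative assumptions.

      import numpy as np

      def noprop_train(X, T, hidden=100, mu=0.01, epochs=20, seed=0):
          """Fix random hidden weights; train only the output layer with the LMS rule."""
          rng = np.random.default_rng(seed)
          W_hidden = rng.normal(size=(X.shape[1], hidden))     # set randomly once, never trained
          W_out = np.zeros((hidden, T.shape[1]))
          H = np.tanh(X @ W_hidden)                            # fixed nonlinear hidden responses
          for _ in range(epochs):
              for i in rng.permutation(len(X)):
                  err = T[i] - H[i] @ W_out
                  W_out += mu * np.outer(H[i], err)            # LMS (Widrow-Hoff) update
          return W_hidden, W_out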

  16. SAGA: a hybrid search algorithm for Bayesian Network structure learning of transcriptional regulatory networks.

    Science.gov (United States)

    Adabor, Emmanuel S; Acquaah-Mensah, George K; Oduro, Francis T

    2015-02-01

    Bayesian Networks have been used for the inference of transcriptional regulatory relationships among genes, and are valuable for obtaining biological insights. However, finding an optimal Bayesian Network (BN) is NP-hard. Thus, heuristic approaches have sought to effectively solve this problem. In this work, we develop a hybrid search method combining Simulated Annealing with a Greedy Algorithm (SAGA). SAGA explores most of the search space by undergoing a two-phase search: first with a Simulated Annealing search and then with a Greedy search. Three sets of background-corrected and normalized microarray datasets were used to test the algorithm. BN structure learning was also conducted using the datasets, and other established search methods as implemented in BANJO (Bayesian Network Inference with Java Objects). The Bayesian Dirichlet Equivalence (BDe) metric was used to score the networks produced with SAGA. SAGA predicted transcriptional regulatory relationships among genes in networks that evaluated to higher BDe scores with high sensitivities and specificities. Thus, the proposed method competes well with existing search algorithms for Bayesian Network structure learning of transcriptional regulatory networks. Copyright © 2014 Elsevier Inc. All rights reserved.
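
    A hedged skeleton of a two-phase SAGA-style search: a simulated annealing phase for exploration followed by a greedy hill-climbing phase for refinement, maximizing a structure score. The score function (e.g., a BDe scorer) and the DAG neighbor generator are assumed inputs, not the authors' code.

      import math
      import random

      def saga_search(initial, score, neighbors, t0=1.0, cooling=0.95, sa_steps=500):
          """Two-phase search: simulated annealing to explore, then greedy refinement."""
          current = best = initial
          t = t0
          for _ in range(sa_steps):                            # phase 1: simulated annealing
              cand = random.choice(neighbors(current))
              delta = score(cand) - score(current)
              if delta > 0 or random.random() < math.exp(delta / t):
                  current = cand
                  if score(current) > score(best):
                      best = current
              t *= cooling
          improved = True
          while improved:                                      # phase 2: greedy hill climbing
              improved = False
              for cand in neighbors(best):
                  if score(cand) > score(best):
                      best, improved = cand, True
                      break
          return best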

  17. A constructive algorithm for unsupervised learning with incremental neural network

    OpenAIRE

    Wang, Jenq-Haur; Wang, Hsin-Yang; Chen, Yen-Lin; Liu, Chuan-Ming

    2015-01-01

    Artificial neural networks (ANNs) have wide applications such as data processing and classification. However, compared with other classification methods, an ANN needs enormous memory space and training time to build the model. This makes ANN infeasible in practical applications. In this paper, we try to integrate the ideas of human learning mechanisms with the existing models of ANN. We propose an incremental neural network construction framework for unsupervised learning. In this framework, a neur...

  18. Quality of Service Issues for Reinforcement Learning Based Routing Algorithm for Ad-Hoc Networks

    OpenAIRE

    Kulkarni, Shrirang Ambaji; Rao, G. Raghavendra

    2012-01-01

    Mobile ad-hoc networks are dynamic networks which are decentralized and autonomous in nature. Many routing algorithms have been proposed for these dynamic networks. It is an important problem to model Quality of Service requirements on these types of algorithms, which traditionally have certain limitations. To model this scenario we have considered a reinforcement learning algorithm, SAMPLE. SAMPLE promises to deal effectively with congestion and with high traffic loads. As it is natural for ad...

  19. A robust regularization algorithm for polynomial networks for machine learning

    Science.gov (United States)

    Jaenisch, Holger M.; Handley, James W.

    2011-06-01

    We present an improvement to the fundamental Group Method of Data Handling (GMDH) Data Modeling algorithm that overcomes the parameter sensitivity to novel cases presented to derived networks. We achieve this result by regularization of the output and using a genetic weighting that selects intermediate models that do not exhibit divergence. The result is the derivation of multi-nested polynomial networks following the Kolmogorov-Gabor polynomial that are robust to mean estimators as well as novel exemplars for input. The full details of the algorithm are presented. We also introduce a new method for approximating GMDH in a single regression model using F, H, and G terms that automatically exports the answers as ordinary differential equations. The MathCAD 15 source code for all algorithms and results are provided.

  20. A Formal Verification Model for Performance Analysis of Reinforcement Learning Algorithms Applied to Dynamic Networks

    OpenAIRE

    Shrirang Ambaji KULKARNI; Raghavendra G. RAO

    2017-01-01

    Routing data packets in a dynamic network is a difficult and important problem in computer networks. As the network is dynamic, it is subject to frequent topology changes and to variable link costs due to congestion and bandwidth. Existing shortest path algorithms fail to converge to better solutions under dynamic network conditions. Reinforcement learning algorithms possess better adaptation techniques in dynamic environments. In this paper we apply a model-based Q-Routing technique ...

  1. A study on the performance comparison of metaheuristic algorithms on the learning of neural networks

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2017-08-01

    The learning or training process of neural networks entails finding the optimal set of parameters, which includes translation vectors, dilation parameters, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of the genetic algorithm half a century ago, the last decade has witnessed an explosion of novel metaheuristic algorithms, such as the harmony search algorithm, the bat algorithm, and the whale optimization algorithm. Despite the proof of the no free lunch theorem in the discipline of optimization, a survey of the machine learning literature gives contrasting results. Some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper aims to investigate whether a certain metaheuristic algorithm will outperform the other algorithms. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated in the learning of neural networks, and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported by previous researchers. Several recommendations are given, including the need for statistical analysis to verify the results and further theoretical work to support the obtained empirical results.

  2. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    One of the essential factors that affect the performance of Artificial Neural Networks is the learning algorithm. The performance of a Multilayer Feed Forward Artificial Neural Network in image compression using different learning algorithms is examined in this paper. Based on Gradient Descent, Conjugate Gradient and Quasi-Newton techniques, three different error back-propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm has better performance as compared to the other two algorithms.

  3. A Probability-based Evolutionary Algorithm with Mutations to Learn Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Sho Fukuda

    2014-12-01

    Bayesian networks are regarded as one of the essential tools to analyze causal relationships between events from data. Learning the structure of highly reliable Bayesian networks from data as quickly as possible is one of the important problems that several studies have tried to address. In recent years, probability-based evolutionary algorithms have been proposed as a new efficient approach to learn Bayesian networks. In this paper, we target one of the probability-based evolutionary algorithms called PBIL (Probability-Based Incremental Learning), and propose a new mutation operator. Through performance evaluation, we found that the proposed mutation operator has a good performance in learning Bayesian networks.
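
    A minimal sketch of PBIL with a mutation step, under the assumption that a network structure is encoded as a bit vector of candidate arcs: the probability vector is sampled, shifted toward the best individual, and occasionally perturbed to preserve diversity. All constants and the specific mutation form are illustrative, not the operator proposed in the paper.

      import numpy as np

      def pbil(n_bits, fitness, pop_size=50, iters=200, lr=0.1, p_mut=0.02, shift=0.05, seed=0):
          """PBIL over a bit-vector encoding of candidate arcs (fitness is maximized)."""
          rng = np.random.default_rng(seed)
          p = np.full(n_bits, 0.5)                             # arc-inclusion probabilities
          for _ in range(iters):
              pop = (rng.random((pop_size, n_bits)) < p).astype(int)
              best = pop[np.argmax([fitness(ind) for ind in pop])]
              p = (1 - lr) * p + lr * best                     # pull the distribution toward the best sample
              mut = rng.random(n_bits) < p_mut                 # mutation keeps diversity in the distribution
              p[mut] = (1 - shift) * p[mut] + shift * rng.random(mut.sum())
              p = np.clip(p, 0.01, 0.99)
          return p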

  4. A Hybrid Constructive Algorithm for Single-Layer Feedforward Networks Learning.

    Science.gov (United States)

    Wu, Xing; Rózycki, Paweł; Wilamowski, Bogdan M

    2015-08-01

    Single-layer feedforward networks (SLFNs) have been proven to be universal approximators when all the parameters are allowed to be adjustable. They are widely used in classification and regression problems. SLFN learning involves two tasks: determining the network size and training the parameters. Most current algorithms do not handle both satisfactorily. Some algorithms focus on construction and only tune part of the parameters, which may not achieve a compact network. Other gradient-based optimization algorithms focus on parameter tuning while the network size has to be preset by the user, so a trial-and-error approach has to be used to search for the optimal network size. Because the results of each trial cannot be reused in another trial, this costs much computation. In this paper, a hybrid constructive (HC) algorithm is proposed for SLFN learning, which can train all the parameters and determine the network size simultaneously. First, by combining the Levenberg-Marquardt algorithm and the least-squares method, a hybrid algorithm is presented for training an SLFN with fixed network size. Then, with the hybrid algorithm, an incremental constructive scheme is proposed. A new randomly initialized neuron is added each time the training becomes trapped in a local minimum. Because training continues from previous results after adding new neurons, the proposed HC algorithm works efficiently. Several practical problems were given for comparison with other popular algorithms. The experimental results demonstrated that the HC algorithm worked more efficiently than those optimization methods with trial and error, and could achieve a much more compact SLFN than those construction algorithms.

  5. Review of Recommender Systems Algorithms Utilized in Social Networks based e-Learning Systems & Neutrosophic System

    Directory of Open Access Journals (Sweden)

    A. A. Salama

    2015-03-01

    In this paper, we present a review of different recommender system algorithms that are utilized in social network based e-Learning systems. Future research will include our proposed e-Learning system that utilizes a Recommender System and a Social Network. Since the world is full of indeterminacy, the neutrosophics have found their place in contemporary research. The fundamental concepts of the neutrosophic set were introduced by Smarandache in [21, 22, 23] and Salama et al. in [24-66]. The purpose of this paper is to utilize a neutrosophic set to analyze social network data collected through learning activities.

  6. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms

    Science.gov (United States)

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.

    2016-01-01

    A hybrid learning method combining software-based backpropagation (BP) learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms. Its weak point is that hardware implementation is extremely difficult. The RWC algorithm, which is very easy to implement in hardware circuits, takes too many iterations for learning. The proposed learning algorithm is a hybrid of these two. The main learning is performed first with a software version of the BP algorithm, and then the learned weights are transplanted onto a hardware version of the neural circuit. At the time of weight transplantation, a significant amount of output error can occur due to the characteristic differences between the software and the hardware. In the proposed method, such error is reduced via complementary learning with the RWC algorithm, which is implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems. PMID:28025566
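
    The sketch below illustrates the complementary hardware-side refinement step in a simplified form: after the software-trained weights are transplanted, small random weight changes are kept only when the measured hardware error decreases. The real RWC rule differs in detail (for example, it reuses a successful perturbation direction); all names and constants are assumptions.

      import numpy as np

      def rwc_refine(weights, hardware_error, delta=0.01, iters=1000, seed=0):
          """Keep small random weight changes only when the measured circuit error drops."""
          rng = np.random.default_rng(seed)
          w = weights.copy()
          best_err = hardware_error(w)                         # error measured on the hardware circuit
          for _ in range(iters):
              trial = w + delta * rng.choice([-1.0, 1.0], size=w.shape)
              err = hardware_error(trial)
              if err < best_err:                               # accept the perturbation only if it helps
                  w, best_err = trial, err
          return w, best_err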

  7. A Formal Verification Model for Performance Analysis of Reinforcement Learning Algorithms Applied to Dynamic Networks

    Directory of Open Access Journals (Sweden)

    Shrirang Ambaji KULKARNI

    2017-04-01

    Routing data packets in a dynamic network is a difficult and important problem in computer networks. As the network is dynamic, it is subject to frequent topology changes and to variable link costs due to congestion and bandwidth. Existing shortest path algorithms fail to converge to better solutions under dynamic network conditions. Reinforcement learning algorithms possess better adaptation techniques in dynamic environments. In this paper we apply a model-based Q-Routing technique for routing in a dynamic network. To analyze the correctness of Q-Routing algorithms mathematically, we provide a proof and also implement a SPIN-based verification model. We also perform a simulation-based analysis of Q-Routing for the given metrics.
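
    For reference, a hedged sketch of the classic Q-Routing update that model-based variants build on: node x refines its estimated delivery time to destination d via neighbor y using the neighbor's own best estimate plus the locally observed queueing and transmission delays. The data layout is an assumption.

      def q_routing_update(Q, x, d, y, queue_delay, tx_delay, neighbors, alpha=0.5):
          """One Q-Routing update for node x sending a packet toward d via neighbor y."""
          # Q is a dict keyed by (node, destination, next_hop) -> estimated delivery time
          remaining = min(Q[(y, d, z)] for z in neighbors[y]) if neighbors[y] else 0.0
          target = queue_delay + tx_delay + remaining          # local cost + neighbor's best estimate
          Q[(x, d, y)] += alpha * (target - Q[(x, d, y)])
          return Q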

  8. Adjoint-operators and non-adiabatic learning algorithms in neural networks

    Science.gov (United States)

    Toomarian, N.; Barhen, J.

    1991-01-01

    Adjoint sensitivity equations are presented, which can be solved simultaneously (i.e., forward in time) with the dynamics of a nonlinear neural network. These equations provide the foundations for a new methodology which enables the implementation of temporal learning algorithms in a highly efficient manner.

  9. Simulating Visual Learning and Optical Illusions via a Network-Based Genetic Algorithm

    Science.gov (United States)

    Siu, Theodore; Vivar, Miguel; Shinbrot, Troy

    We present a neural network model that uses a genetic algorithm to identify spatial patterns. We show that the model both learns and reproduces common visual patterns and optical illusions. Surprisingly, we find that the illusions generated are a direct consequence of the network architecture used. We discuss the implications of our results and the insights that we gain on how humans fall for optical illusions.

  10. Learning Networks, Networked Learning

    NARCIS (Netherlands)

    Sloep, Peter; Berlanga, Adriana

    2010-01-01

    Sloep, P. B., & Berlanga, A. J. (2011). Learning Networks, Networked Learning [Redes de Aprendizaje, Aprendizaje en Red]. Comunicar, XIX(37), 55-63. Retrieved from http://dx.doi.org/10.3916/C37-2011-02-05

  11. Structure Learning of Bayesian Networks by Estimation of Distribution Algorithms with Transpose Mutation

    Directory of Open Access Journals (Sweden)

    D.W. Kim

    2013-08-01

    Estimation of distribution algorithms (EDAs) constitute a new branch of evolutionary optimization algorithms that were developed as a natural alternative to genetic algorithms (GAs). Several studies have demonstrated that the heuristic scheme of EDAs is effective and efficient for many optimization problems. Recently, it has been reported that the incorporation of mutation into EDAs increases the diversity of genetic information in the population, thereby avoiding premature convergence to a suboptimal solution. In this study, we propose a new mutation operator, a transpose mutation, designed for Bayesian structure learning. It enhances the diversity of the offspring and increases the possibility of inferring the correct arc direction by considering the arc directions in candidate solutions as bi-directional, using the matrix transpose operator. Compared to conventional EDAs, the EDAs adopting the transpose mutation are superior and effective algorithms for learning Bayesian networks.
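
    One plausible reading of the transpose mutation, sketched below under the assumption that a candidate structure is encoded as an adjacency matrix: with some probability the matrix is transposed, reversing every arc direction so that both orientations of the discovered arcs can be considered. This is an illustration, not the authors' exact operator.

      import numpy as np

      def transpose_mutation(adj, p_mut, rng):
          """With probability p_mut, transpose the adjacency matrix, reversing all arc directions."""
          if rng.random() < p_mut:
              return adj.T.copy()
          return adj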

  12. Interactive Algorithms for Teaching and Learning Acute Medicine in the Network of Medical Faculties MEFANET

    Science.gov (United States)

    Štourač, Petr; Komenda, Martin; Harazim, Hana; Kosinová, Martina; Gregor, Jakub; Hůlek, Richard; Smékalová, Olga; Křikava, Ivo; Štoudek, Roman; Dušek, Ladislav

    2013-01-01

    Background: Medical Faculties Network (MEFANET) has established itself as the authority for setting standards for medical educators in the Czech Republic and Slovakia, 2 independent countries with similar languages that once comprised a federation and that still retain the same curricular structure for medical education. One of the basic goals of the network is to advance medical teaching and learning with the use of modern information and communication technologies. Objective: We present the education portal AKUTNE.CZ as an important part of the MEFANET’s content. Our focus is primarily on simulation-based tools for teaching and learning acute medicine issues. Methods: Three fundamental elements of the MEFANET e-publishing system are described: (1) medical disciplines linker, (2) authentication/authorization framework, and (3) multidimensional quality assessment. A new set of tools for technology-enhanced learning have been introduced recently: Sandbox (works in progress), WikiLectures (collaborative content authoring), Moodle-MEFANET (central learning management system), and Serious Games (virtual casuistics and interactive algorithms). The latest development in MEFANET is designed for indexing metadata about simulation-based learning objects, also known as electronic virtual patients or virtual clinical cases. The simulations assume the form of interactive algorithms for teaching and learning acute medicine. An anonymous questionnaire of 10 items was used to explore students’ attitudes and interests in using the interactive algorithms as part of their medical or health care studies. Data collection was conducted over 10 days in February 2013. Results: In total, 25 interactive algorithms in the Czech and English languages have been developed and published on the AKUTNE.CZ education portal to allow the users to test and improve their knowledge and skills in the field of acute medicine. In the feedback survey, 62 participants completed the online questionnaire (13

  13. Interactive algorithms for teaching and learning acute medicine in the network of medical faculties MEFANET.

    Science.gov (United States)

    Schwarz, Daniel; Štourač, Petr; Komenda, Martin; Harazim, Hana; Kosinová, Martina; Gregor, Jakub; Hůlek, Richard; Smékalová, Olga; Křikava, Ivo; Štoudek, Roman; Dušek, Ladislav

    2013-07-08

    Medical Faculties Network (MEFANET) has established itself as the authority for setting standards for medical educators in the Czech Republic and Slovakia, 2 independent countries with similar languages that once comprised a federation and that still retain the same curricular structure for medical education. One of the basic goals of the network is to advance medical teaching and learning with the use of modern information and communication technologies. We present the education portal AKUTNE.CZ as an important part of the MEFANET's content. Our focus is primarily on simulation-based tools for teaching and learning acute medicine issues. Three fundamental elements of the MEFANET e-publishing system are described: (1) medical disciplines linker, (2) authentication/authorization framework, and (3) multidimensional quality assessment. A new set of tools for technology-enhanced learning have been introduced recently: Sandbox (works in progress), WikiLectures (collaborative content authoring), Moodle-MEFANET (central learning management system), and Serious Games (virtual casuistics and interactive algorithms). The latest development in MEFANET is designed for indexing metadata about simulation-based learning objects, also known as electronic virtual patients or virtual clinical cases. The simulations assume the form of interactive algorithms for teaching and learning acute medicine. An anonymous questionnaire of 10 items was used to explore students' attitudes and interests in using the interactive algorithms as part of their medical or health care studies. Data collection was conducted over 10 days in February 2013. In total, 25 interactive algorithms in the Czech and English languages have been developed and published on the AKUTNE.CZ education portal to allow the users to test and improve their knowledge and skills in the field of acute medicine. In the feedback survey, 62 participants completed the online questionnaire (13.5%) from the total 460 addressed

  14. Behavioral Profiling of SCADA Network Traffic Using Machine Learning Algorithms

    Science.gov (United States)

    2014-03-27

    The indexed excerpt for this record is fragmentary, mixing body text with table-of-contents and acronym-list residue. The recoverable fragments mention potentially insecure mechanisms for authentication and authorization, application-layer protocols such as LDAP, HTTP, HTTPS and POP3, the Department of Homeland Security, and deep packet inspection (DPI).

  15. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected and the output weights are analytically determined based on the sequentially arriving data. The algorithm uses the ideas of ELM of Huang et al. developed for batch learning which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. Detailed performance comparison of OS-ELM is done with other popular sequential learning algorithms on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that the OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
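
    A condensed sketch of the OS-ELM recursion as described above: random hidden-node parameters, an initial batch solved by least squares, and a recursive least-squares update for each arriving chunk. It assumes the initial batch is at least as large as the number of hidden nodes; the sigmoid hidden layer and all variable names are illustrative.

      import numpy as np

      def hidden(X, W, b):
          """Random-feature hidden layer with sigmoid additive nodes."""
          return 1.0 / (1.0 + np.exp(-(X @ W + b)))

      def oselm_init(X0, T0, W, b):
          """Initial batch: ordinary least squares on the hidden-layer outputs."""
          H0 = hidden(X0, W, b)                    # assumes len(X0) >= number of hidden nodes
          P = np.linalg.inv(H0.T @ H0)
          beta = P @ H0.T @ T0
          return P, beta

      def oselm_update(P, beta, Xk, Tk, W, b):
          """Recursive least-squares update for one arriving chunk (Xk, Tk)."""
          H = hidden(Xk, W, b)
          K = np.linalg.inv(np.eye(len(Xk)) + H @ P @ H.T)
          P = P - P @ H.T @ K @ H @ P
          beta = beta + P @ H.T @ (Tk - H @ beta)
          return P, beta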

  16. Short-Term Load Forecasting Using Adaptive Annealing Learning Algorithm Based Reinforcement Neural Network

    Directory of Open Access Journals (Sweden)

    Cheng-Ming Lee

    2016-11-01

    A reinforcement learning algorithm is proposed to improve the accuracy of short-term load forecasting (STLF) in this article. The proposed model integrates a radial basis function neural network (RBFNN), support vector regression (SVR), and an adaptive annealing learning algorithm (AALA). In the proposed methodology, the initial structure of the RBFNN is first determined by using an SVR. Then, an AALA with time-varying learning rates is used to optimize the initial parameters of the SVR-RBFNN (AALA-SVR-RBFNN). In order to overcome stagnation in the search for the optimal RBFNN, particle swarm optimization (PSO) is applied to simultaneously find promising learning rates in the AALA. Finally, the short-term load demands are predicted by using the optimal RBFNN. The performance of the proposed methodology is verified on an actual load dataset from the Taiwan Power Company (TPC). Simulation results reveal that the proposed AALA-SVR-RBFNN can achieve better load forecasting precision compared to various RBFNNs.

  17. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation.

    Science.gov (United States)

    Vuković, Najdan; Miljković, Zoran

    2013-10-01

    Radial basis function (RBF) neural networks are constructed of a certain number of RBF neurons, and these networks are among the most used neural networks for modeling various nonlinear problems in engineering. A conventional RBF neuron is usually based on a Gaussian type of activation function with a single width for each activation function. This feature restricts neuron performance for modeling complex nonlinear problems. To accommodate the limitation of a single scale, this paper presents a neural network with a similar but different activation function, the hyper basis function (HBF). The HBF allows different scaling of input dimensions to provide better generalization when dealing with complex nonlinear problems in engineering practice. The HBF is based on a generalization of the Gaussian type of neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and a prototype vector. Compared to the RBF, the HBF neuron has more parameters to optimize, but an HBF neural network needs fewer HBF neurons to memorize the relationship between input and output sets in order to achieve good generalization. However, recent research results on HBF neural network performance have shown that an optimal way of constructing this type of neural network is needed; this paper addresses this issue and modifies a sequential learning algorithm for the HBF neural network that exploits the concept of a neuron's significance and allows growing and pruning of HBF neurons during the learning process. An extensive experimental study shows that the HBF neural network, trained with the developed learning algorithm, achieves lower prediction error and a more compact network. Copyright © 2013 Elsevier Ltd. All rights reserved.
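
    A small sketch contrasting the HBF neuron with a conventional RBF neuron: the HBF response uses a Mahalanobis-like distance with a full inverse covariance, so each input dimension or direction gets its own scaling. Names are illustrative.

      import numpy as np

      def hbf_activation(x, center, inv_cov):
          """Gaussian-like response with a Mahalanobis-like distance."""
          diff = x - center
          return np.exp(-(diff @ inv_cov @ diff))

      # A conventional RBF neuron is the special case with a single width:
      # inv_cov = np.eye(len(x)) / width ** 2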

  18. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.

    Science.gov (United States)

    Xu, Yan; Zeng, Xiaoqin; Han, Lixin; Yang, Jing

    2013-07-01

    We use a supervised multi-spike learning algorithm for spiking neural networks (SNNs) with temporal encoding to simulate the learning mechanism of biological neurons in which the SNN output spike trains are encoded by firing times. We first analyze why existing gradient-descent-based learning methods for SNNs have difficulty in achieving multi-spike learning. We then propose a new multi-spike learning method for SNNs based on gradient descent that solves the problems of error function construction and interference among multiple output spikes during learning. The method could be widely applied to single spiking neurons to learn desired output spike trains and to multilayer SNNs to solve classification problems. By overcoming learning interference among multiple spikes, our method has high learning accuracy when there are a relatively large number of output spikes in need of learning. We also develop an output encoding strategy with respect to multiple spikes for classification problems. This effectively improves the classification accuracy of multi-spike learning compared to that of single-spike learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Directory of Open Access Journals (Sweden)

    Juan Pardo

    2015-04-01

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With such a purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It performs the model training with every new sample that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources.
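
    A minimal sketch of the on-line, per-sample back-propagation update such a streaming approach relies on: the model is updated with each new sample as it arrives and no history is stored. The single-hidden-layer shape, learning rate, and class layout are assumptions, not the microcontroller implementation described in the paper.

      import numpy as np

      class OnlineMLP:
          """One back-propagation step per arriving sample; no stored history."""

          def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
              rng = np.random.default_rng(seed)
              self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
              self.b1 = np.zeros(n_hidden)
              self.w2 = rng.normal(scale=0.1, size=n_hidden)
              self.b2 = 0.0
              self.lr = lr

          def update(self, x, target):
              h = np.tanh(x @ self.W1 + self.b1)
              y = h @ self.w2 + self.b2
              err = y - target
              self.w2 -= self.lr * err * h                     # output-layer gradient step
              self.b2 -= self.lr * err
              g = err * self.w2 * (1 - h ** 2)                 # back-propagated hidden gradient
              self.W1 -= self.lr * np.outer(x, g)
              self.b1 -= self.lr * g
              return y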

  20. Online learning algorithm for time series forecasting suitable for low cost wireless sensor networks nodes.

    Science.gov (United States)

    Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma

    2015-04-21

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. With such a purpose, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It performs the model training with every new sample that arrives at the system, without saving enormous quantities of data to create a historical database as usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources.

  1. Automatic classification of schizophrenia using resting-state functional language network via an adaptive learning algorithm

    Science.gov (United States)

    Zhu, Maohu; Jie, Nanfeng; Jiang, Tianzi

    2014-03-01

    A reliable and precise classification of schizophrenia is significant for its diagnosis and treatment. Functional magnetic resonance imaging (fMRI) is a novel tool increasingly used in schizophrenia research. Recent advances in statistical learning theory have led to applying pattern classification algorithms to assess the diagnostic value of functional brain networks discovered from resting-state fMRI data. The aim of this study was to propose an adaptive learning algorithm to distinguish schizophrenia patients from normal controls using the resting-state functional language network. Furthermore, here the classification of schizophrenia was regarded as a sample selection problem where a sparse subset of samples was chosen from the labeled training set. Using these selected samples, which we call informative vectors, a classifier for the clinical diagnosis of schizophrenia was established. We experimentally demonstrated that the proposed algorithm incorporating the resting-state functional language network achieved 83.6% leave-one-out accuracy on resting-state fMRI data of 27 schizophrenia patients and 28 normal controls. In contrast with K-Nearest-Neighbor (KNN), Support Vector Machine (SVM) and l1-norm methods, our method yielded better classification performance. Moreover, our results suggested that a dysfunction of the resting-state functional language network plays an important role in the clinical diagnosis of schizophrenia.

  2. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state of the art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  3. Application of neural networks and other machine learning algorithms to DNA sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lapedes, A.; Barnes, C.; Burks, C.; Farber, R.; Sirotkin, K.

    1988-01-01

    In this article we report initial, quantitative results on the application of simple neural networks, and simple machine learning methods, to two problems in DNA sequence analysis. The two problems we consider are: (1) determination of whether procaryotic and eucaryotic DNA sequence segments are translated to protein. An accuracy of 99.4% is reported for procaryotic DNA (E. coli) and 98.4% for eucaryotic DNA (H. sapiens genes known to be expressed in liver); (2) determination of whether eucaryotic DNA sequence segments containing the dinucleotides "AG" or "GT" are transcribed to RNA splice junctions. An accuracy of 91.2% was achieved on intron/exon splice junctions (acceptor sites) and 92.8% on exon/intron splice junctions (donor sites). The solution of these two problems, by use of information processing algorithms operating on unannotated base sequences and without recourse to biological laboratory work, is relevant to the Human Genome Project. A variety of neural network, machine learning, and information theoretic algorithms are used. The accuracies obtained exceed those of previous investigations for which quantitative results are available in the literature. They result from an ongoing program of research that applies machine learning algorithms to the problem of determining the biological function of DNA sequences. Some predictions of possible new genes using these methods are listed, although a complete survey of the H. sapiens and E. coli sections of GenBank will be given elsewhere. 36 refs., 6 figs., 6 tabs.
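
    As a small illustration of the preprocessing such experiments typically require, the sketch below one-hot encodes a fixed-length window of bases around a candidate site so that a perceptron or small neural network can be trained on it; the encoding details are assumptions, not the authors' exact pipeline.

      import numpy as np

      BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

      def one_hot_window(seq):
          """One-hot encode a fixed-length window of bases; unknown bases stay all-zero."""
          x = np.zeros((len(seq), 4))
          for i, base in enumerate(seq.upper()):
              if base in BASES:
                  x[i, BASES[base]] = 1.0
          return x.ravel()

      # Example: features for a short window centered on a candidate "GT" donor site.
      features = one_hot_window("CAGGTAAGT")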

  4. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations.

    Science.gov (United States)

    Chen, Fangyue; Chen, Guanrong Ron; He, Guolong; Xu, Xiubin; He, Qinbin

    2009-10-01

    Universal perceptron (UP), a generalization of Rosenblatt's perceptron, is considered in this paper, which is capable of implementing all Boolean functions (BFs). In the classification of BFs, there are: 1) linearly separable Boolean function (LSBF) class, 2) parity Boolean function (PBF) class, and 3) non-LSBF and non-PBF class. To implement these functions, UP takes different kinds of simple topological structures in which each contains at most one hidden layer along with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which is able to quickly train a network with any prescribed BF. The focus is on performing LSBF and PBF by a single-layer perceptron (SLP) with the new algorithm. Two criteria for LSBF and PBF are proposed, respectively, and a new measure for a BF, named nonlinearly separable degree (NLSD), is introduced. In the sense of this measure, the PBF is the most complex one. The new algorithm has many advantages including, in particular, fast running speed, good robustness, and no need of considering the convergence property. For example, the number of iterations and computations in implementing the basic 2-bit logic operations such as AND, OR, and XOR by using the new algorithm is far smaller than the ones needed by using other existing algorithms such as error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from UP can be directly used in designing of the template of cellular neural networks (CNNs), which has been considered as a new spatial-temporal sensory computing paradigm.

  5. Fuzzy-logic based Q-Learning interference management algorithms in two-tier networks

    Science.gov (United States)

    Xu, Qiang; Xu, Zezhong; Li, Li; Zheng, Yan

    2017-10-01

    Offloading traffic from the macrocell network and enhancing coverage can be realized by deploying femtocells in indoor scenarios. However, the system performance of the two-tier network can be impaired by co-tier and cross-tier interference. In this paper, a distributed resource allocation scheme is studied in which each femtocell base station is self-governed and the resource cannot be assigned centrally through the gateway. A novel Q-Learning interference management scheme is proposed that is divided into a cooperative part and an independent part. In the cooperative algorithm, interference information is exchanged between the cell-edge users of the same cell, which are classified by fuzzy logic. Meanwhile, we allocate orthogonal subchannels to the high-rate cell-edge users to disperse the interference power when the data rate requirement is satisfied. In the independent algorithm, the resource is assigned directly according to the minimum power principle. Simulation results are provided to demonstrate significant performance improvements in terms of average data rate, interference power and energy efficiency over cutting-edge resource allocation algorithms.

  6. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks.

    Science.gov (United States)

    Mustapha, Ibrahim; Mohd Ali, Borhanuddin; Rasid, Mohd Fadlee A; Sali, Aduwati; Mohamad, Hafizal

    2015-08-13

    It is well-known that clustering partitions network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm and the obtained simulation results show convergence, learning and adaptability of the algorithm to dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach.

  7. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks

    Science.gov (United States)

    Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal

    2015-01-01

    It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While energy efficiency has been well investigated in conventional wireless sensor networks, the latter aspect has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters in order to reach an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP), and the obtained simulation results show the convergence, learning and adaptability of the algorithm to a dynamic environment in reaching an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance for the proposed approach. The results further reveal that energy savings of 9% and a significant improvement in Primary User (PU) detection can be achieved with the proposed approach. PMID:26287191

  8. Application of reinforcement learning in cognitive radio networks: models and algorithms.

    Science.gov (United States)

    Yau, Kok-Lim Alvin; Poh, Geong-Sen; Chien, Su Fong; Al-Rawi, Hasan A A

    2014-01-01

    Cognitive radio (CR) enables unlicensed users to exploit underutilized portions of the licensed spectrum whilst minimizing interference to licensed users. Reinforcement learning (RL), which is an artificial intelligence approach, has been applied to enable each unlicensed user to observe and carry out optimal actions for performance enhancement in a wide range of schemes in CR, such as dynamic channel selection and channel sensing. This paper presents new discussions of RL in the context of CR networks. It provides an extensive review of how most schemes have been approached using traditional and enhanced RL algorithms through state, action, and reward representations. Examples of the enhancements to RL, which do not appear in the traditional RL approach, are rules and cooperative learning. This paper also reviews performance enhancements brought about by the RL algorithms, as well as open issues. This paper aims to establish a foundation in order to spark new research interests in this area. Our discussion is presented in a tutorial manner so that it is comprehensible to readers outside the specialty of RL and CR.

  9. Application of Reinforcement Learning in Cognitive Radio Networks: Models and Algorithms

    Directory of Open Access Journals (Sweden)

    Kok-Lim Alvin Yau

    2014-01-01

    Full Text Available Cognitive radio (CR) enables unlicensed users to exploit underutilized portions of the licensed spectrum whilst minimizing interference to licensed users. Reinforcement learning (RL), which is an artificial intelligence approach, has been applied to enable each unlicensed user to observe and carry out optimal actions for performance enhancement in a wide range of schemes in CR, such as dynamic channel selection and channel sensing. This paper presents new discussions of RL in the context of CR networks. It provides an extensive review of how most schemes have been approached using traditional and enhanced RL algorithms through state, action, and reward representations. Examples of the enhancements to RL, which do not appear in the traditional RL approach, are rules and cooperative learning. This paper also reviews performance enhancements brought about by the RL algorithms, as well as open issues. This paper aims to establish a foundation in order to spark new research interests in this area. Our discussion is presented in a tutorial manner so that it is comprehensible to readers outside the specialty of RL and CR.

  10. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: The book covers neural networks, graphical models, reinforcement le

  11. Using neural networks and Dyna algorithm for integrated planning, reacting and learning in systems

    Science.gov (United States)

    Lima, Pedro; Beard, Randal

    1992-01-01

    The traditional AI answer to the decision-making problem for a robot is planning. However, planning is usually CPU-time consuming and depends on the availability and accuracy of a world model. The Dyna system, described in earlier work, uses trial and error to learn a world model which is simultaneously used to plan reactions resulting in optimal action sequences. It is an attempt to integrate planning, reactive, and learning systems. The architecture of Dyna is presented and its different blocks are described. There are three main components of the system. The first is the world model used by the robot for internal world representation. The input to the world model is the current state and the action taken in that state. The output is the corresponding reward and resulting state. The second module in the system is the policy. The policy observes the current state and outputs the action to be executed by the robot. At the beginning of program execution, the policy is stochastic and through learning progressively becomes deterministic. The policy decides upon an action according to the output of an evaluation function, which is the third module of the system. The evaluation function takes the following as input: the current state of the system, the action taken in that state, the resulting state, and a reward generated by the world which is proportional to the current distance from the goal state. Originally, the work proposed was as follows: (1) to implement a simple 2-D world where a 'robot' navigates around obstacles and learns the path to a goal, using lookup tables; (2) to substitute neural networks for the world model and the Q estimation function; and (3) to apply the algorithm to a more complex world where the use of a neural network would be fully justified. In this paper, the system design and achieved results are described. First we implement the world model with a neural network and leave Q implemented as a lookup table. Next, we use a
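
    The three components described above (world model, policy and evaluation function) are the ingredients of the now-standard Dyna-Q algorithm, which the following sketch implements in tabular form. The environment interface (reset(), step(), actions) and all hyperparameters are assumptions made for illustration, not details taken from the paper.

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=200, planning_steps=10, alpha=0.1, gamma=0.95, eps=0.1):
    """Minimal tabular Dyna-Q: learn a Q-function from real experience and,
    after each real step, replay `planning_steps` simulated transitions from a
    learned (deterministic) world model.  `env` is assumed to expose reset()
    and step(action) -> (next_state, reward, done) plus an `actions` list."""
    Q = defaultdict(float)
    model = {}                                     # (state, action) -> (reward, next_state)

    def greedy(s):
        return max(env.actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.choice(env.actions) if random.random() < eps else greedy(s)
            s2, r, done = env.step(a)
            best = max(Q[(s2, b)] for b in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])   # direct RL update
            model[(s, a)] = (r, s2)                               # model learning
            for _ in range(planning_steps):                       # planning with the model
                ps, pa = random.choice(list(model))
                pr, ps2 = model[(ps, pa)]
                pbest = max(Q[(ps2, b)] for b in env.actions)
                Q[(ps, pa)] += alpha * (pr + gamma * pbest - Q[(ps, pa)])
            s = s2
    return Q
```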

  12. A Coral Reef Algorithm Based on Learning Automata for the Coverage Control Problem of Heterogeneous Directional Sensor Networks.

    Science.gov (United States)

    Li, Ming; Miao, Chunyan; Leung, Cyril

    2015-12-04

    Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches.

  13. A Q-Learning-Based Delay-Aware Routing Algorithm to Extend the Lifetime of Underwater Sensor Networks.

    Science.gov (United States)

    Jin, Zhigang; Ma, Yingying; Su, Yishan; Li, Shuo; Fu, Xiaomei

    2017-07-19

    Underwater sensor networks (UWSNs) have become a hot research topic because of their various aquatic applications. As underwater sensor nodes are powered by built-in batteries which are difficult to replace, extending the network lifetime is a pressing need. Due to the low and variable transmission speed of sound, the design of reliable routing algorithms for UWSNs is challenging. In this paper, we propose a Q-learning-based delay-aware routing (QDAR) algorithm to extend the lifetime of underwater sensor networks. In QDAR, a data collection phase is designed to adapt to the dynamic environment. With the application of the Q-learning technique, QDAR can determine a globally optimal next hop rather than a greedy one. We define an action-utility function in which both residual energy and propagation delay are considered for adequate routing decisions. Thus, the QDAR algorithm can extend the network lifetime by uniformly distributing the residual energy, and it provides lower end-to-end delay. The simulation results show that our protocol yields nearly the same network lifetime as, and reduces the end-to-end delay by 20-25% compared with, a classic lifetime-extended routing protocol (QELAR).
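
    The action-utility idea, scoring a candidate next hop by both residual energy and propagation delay and feeding that score into a Q-learning update, can be sketched as follows. The weighting, normalisation and update shown are illustrative assumptions, not the exact QDAR equations.

```python
def hop_reward(residual_energy, initial_energy, prop_delay, max_delay,
               w_energy=0.5, w_delay=0.5):
    """Illustrative reward for forwarding via a candidate next hop: favour
    nodes with ample residual energy and short propagation delay.  The
    weights and normalisation are assumptions, not the QDAR formulas."""
    energy_term = residual_energy / initial_energy         # in [0, 1]
    delay_term = 1.0 - min(prop_delay / max_delay, 1.0)    # shorter delay -> closer to 1
    return w_energy * energy_term + w_delay * delay_term

def q_update(q_table, node, next_hop, reward, next_best, alpha=0.1, gamma=0.9):
    """Standard Q-learning update applied to the routing-table entry."""
    old = q_table.get((node, next_hop), 0.0)
    q_table[(node, next_hop)] = old + alpha * (reward + gamma * next_best - old)
    return q_table[(node, next_hop)]
```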

  14. A Q-Learning-Based Delay-Aware Routing Algorithm to Extend the Lifetime of Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhigang Jin

    2017-07-01

    Full Text Available Underwater sensor networks (UWSNs) have become a hot research topic because of their various aquatic applications. As underwater sensor nodes are powered by built-in batteries which are difficult to replace, extending the network lifetime is a pressing need. Due to the low and variable transmission speed of sound, the design of reliable routing algorithms for UWSNs is challenging. In this paper, we propose a Q-learning-based delay-aware routing (QDAR) algorithm to extend the lifetime of underwater sensor networks. In QDAR, a data collection phase is designed to adapt to the dynamic environment. With the application of the Q-learning technique, QDAR can determine a globally optimal next hop rather than a greedy one. We define an action-utility function in which both residual energy and propagation delay are considered for adequate routing decisions. Thus, the QDAR algorithm can extend the network lifetime by uniformly distributing the residual energy, and it provides lower end-to-end delay. The simulation results show that our protocol yields nearly the same network lifetime as, and reduces the end-to-end delay by 20-25% compared with, a classic lifetime-extended routing protocol (QELAR).

  15. Learning Lightness Algorithms

    Science.gov (United States)

    Hurlbert, Anya C.; Poggio, Tomaso A.

    1989-03-01

    Lightness algorithms, which recover surface reflectance from the image irradiance signal in individual color channels, provide one solution to the computational problem of color constancy. We compare three methods for constructing (or "learning") lightness algorithms from examples in a Mondrian world: optimal linear estimation, backpropagation (BP) on a two-layer network, and optimal polynomial estimation. In each example, the input data (image irradiance) is paired with the desired output (surface reflectance). Optimal linear estimation produces a lightness operator that is approximately equivalent to a center-surround, or bandpass, filter and which resembles a new lightness algorithm recently proposed by Land. This technique is based on the assumption that the operator that transforms input into output is linear, which is true for a certain class of early vision algorithms that may therefore be synthesized in a similar way from examples. Although the backpropagation net performs slightly better on new input data than the estimated linear operator, the optimal polynomial operator of order two performs marginally better than both.
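
    The optimal linear estimation step can be reproduced in a few lines: given example pairs of image irradiance and surface reflectance, the linear lightness operator is simply the least-squares solution. The sketch below uses synthetic data purely to illustrate that estimation step; it is not a Mondrian-world simulation.

```python
import numpy as np

# Learn a linear "lightness operator" from (irradiance, reflectance) example
# pairs by optimal linear (least-squares) estimation.
rng = np.random.default_rng(0)
n_pixels, n_examples = 64, 500
true_op = rng.normal(size=(n_pixels, n_pixels)) / n_pixels   # hypothetical ground truth

irradiance = rng.uniform(size=(n_examples, n_pixels))                                  # inputs
reflectance = irradiance @ true_op.T + 0.01 * rng.normal(size=(n_examples, n_pixels))  # targets

# Closed-form estimator: solve irradiance @ L.T ~= reflectance in the least-squares sense.
L_hat_T, *_ = np.linalg.lstsq(irradiance, reflectance, rcond=None)
print("max coefficient error:", np.abs(L_hat_T.T - true_op).max())
```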

  16. Learning from a carbon dioxide capture system dataset: Application of the piecewise neural network algorithm

    Directory of Open Access Journals (Sweden)

    Veronica Chan

    2017-03-01

    Full Text Available This paper presents the application of a neural network rule extraction algorithm, called the piece-wise linear artificial neural network or PWL-ANN algorithm, on a carbon capture process system dataset. The objective of the application is to enhance understanding of the intricate relationships among the key process parameters. The algorithm extracts rules in the form of multiple linear regression equations by approximating the sigmoid activation functions of the hidden neurons in an artificial neural network (ANN). The PWL-ANN algorithm overcomes the weaknesses of the statistical regression approach, in which the accuracies of the generated predictive models are often not satisfactory, and the opaqueness of the ANN models. The results show that the generated PWL-ANN models have accuracies as high as those of the originally trained ANN models on the four datasets of the carbon capture process system. An analysis of the extracted rules and the magnitude of the coefficients in the equations revealed that the three most significant parameters of the CO2 production rate are the steam flow rate through the reboiler, the reboiler pressure, and the CO2 concentration in the flue gas.

  17. An empirical comparison of popular structure learning algorithms with a view to gene network inference

    Czech Academy of Sciences Publication Activity Database

    Djordjilović, V.; Chiogna, M.; Vomlel, Jiří

    2017-01-01

    Roč. 88, č. 1 (2017), s. 602-613 ISSN 0888-613X R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords: Bayesian networks * Structure learning * Reverse engineering * Gene networks Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.845, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/vomlel-0477168.pdf

  18. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  19. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network...... in the latter model implies optimality in the decomposable bulk synchronous parallel model, which is known to effectively describe a wide and significant class of parallel platforms. The proposed framework can be regarded as an attempt to port the notion of obliviousness, well established in the context...

  20. Application of Reinforcement Learning Algorithms for the Adaptive Computation of the Smoothing Parameter for Probabilistic Neural Network.

    Science.gov (United States)

    Kusy, Maciej; Zajdel, Roman

    2015-09-01

    In this paper, we propose new methods for the choice and adaptation of the smoothing parameter of the probabilistic neural network (PNN). These methods are based on three reinforcement learning algorithms: Q(0)-learning, Q(λ)-learning, and stateless Q-learning. We consider three types of PNN classifiers: a model that uses a single smoothing parameter for the whole network, a model that uses a single smoothing parameter for each data attribute, and a model that possesses a matrix of smoothing parameters, different for each data variable and data class. Reinforcement learning is applied to find a value of the smoothing parameter that maximizes prediction ability. PNN models with smoothing parameters computed according to the proposed algorithms are tested on eight databases by calculating the test error with the use of a cross-validation procedure. The results are compared with state-of-the-art methods for PNN training published in the literature to date and, additionally, with a PNN whose sigma is determined by means of the conjugate gradient approach. The results demonstrate that the proposed approaches can be used as alternative PNN training procedures.
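
    A PNN prediction is essentially a Parzen-window classifier governed by the smoothing parameter sigma, and the stateless Q-learning variant can be pictured as a bandit over a grid of candidate sigmas rewarded by held-out accuracy. The sketch below is that simplified reading, with an assumed grid, reward and hyperparameters rather than the paper's exact scheme.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma):
    """Probabilistic neural network (Parzen-window) prediction for one sample."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

def stateless_q_sigma(X, y, sigmas, episodes=200, alpha=0.1, eps=0.2, seed=0):
    """Stateless Q-learning (effectively a bandit) over candidate sigma values:
    the reward of an action is the leave-one-out accuracy on one random sample.
    X and y are assumed to be NumPy arrays."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(sigmas))
    for _ in range(episodes):
        a = int(rng.integers(len(sigmas))) if rng.random() < eps else int(np.argmax(q))
        i = int(rng.integers(len(X)))                  # evaluate on one held-out sample
        mask = np.arange(len(X)) != i
        correct = pnn_predict(X[mask], y[mask], X[i], sigmas[a]) == y[i]
        q[a] += alpha * (float(correct) - q[a])
    return sigmas[int(np.argmax(q))]
```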

  1. Learning Bayesian network structure using a cloud-based adaptive immune genetic algorithm

    Science.gov (United States)

    Song, Qin; Lin, Feng; Sun, Wei; Chang, KC

    2011-06-01

    A new BN structure learning method using a cloud-based adaptive immune genetic algorithm (CAIGA) is proposed. Since the probabilities of crossover and mutation in CAIGA are adaptively varied depending on the X-conditional cloud generator, the method improves the diversity of the structure population and avoids local optima, owing to the stochastic nature and stable tendency of the cloud model. Moreover, the offspring structure population is simplified by using immune theory to reduce its computational complexity. The experimental results reveal that this method can be effectively used for BN structure learning.

  2. Artificial Neural Network Approach in Laboratory Test Reporting:  Learning Algorithms.

    Science.gov (United States)

    Demirci, Ferhat; Akan, Pinar; Kume, Tuncay; Sisman, Ali Riza; Erbayraktar, Zubeyde; Sevinc, Suleyman

    2016-08-01

    In the field of laboratory medicine, minimizing errors and establishing standardization is only possible with predefined processes. The aim of this study was to build an experimental decision algorithm model, open to improvement, that would efficiently and rapidly evaluate the results of biochemical tests with critical values by evaluating multiple factors concurrently. The experimental model was built with Weka software (Weka, Waikato, New Zealand) based on the artificial neural network method. Data were received from the Dokuz Eylül University Central Laboratory. "Training sets" were developed for our experimental model to teach it the evaluation criteria. After training the system, "test sets" developed for different conditions were used to statistically assess the validity of the model. After developing the decision algorithm with three iterations of training, no result verified by the model was refused by the laboratory specialist. The sensitivity of the model was 91% and the specificity was 100%. The estimated κ score was 0.950. This is the first study based on an artificial neural network to build an experimental assessment and decision algorithm model. By integrating our trained algorithm model into a laboratory information system, it may be possible to reduce employees' workload without compromising patient safety. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Diagnosis of Malignancy in Thyroid Tumors by Multi-Layer Perceptron Neural Networks With Different Batch Learning Algorithms.

    Science.gov (United States)

    Pourahmad, Saeedeh; Azad, Mohsen; Paydar, Shahram

    2015-03-30

    To diagnose malignancy in thyroid tumors, a neural network approach is applied and the performance of thirteen batch learning algorithms is investigated with respect to prediction accuracy. A back-propagation feed-forward neural network (BP FNN) is designed, and three different numbers of neurons in the hidden layer are compared (5, 10 and 20 neurons). The pathology results after surgery and the clinical findings before surgery are used as the target outputs and the inputs, respectively. The best algorithm(s) is/are chosen based on the mean or maximum accuracy of the predictions and on the area under the Receiver Operating Characteristic (ROC) curve. The results show the superiority of the network with 5 neurons in the hidden layer. In addition, better performance is obtained with the Polak-Ribiere conjugate gradient, BFGS quasi-Newton and one-step secant algorithms according to their prediction accuracy (83%), and with the scaled conjugate gradient and BFGS quasi-Newton algorithms based on their area under the ROC curve (0.905).

  4. Analyzing the evolutionary mechanisms of the Air Transportation System-of-Systems using network theory and machine learning algorithms

    Science.gov (United States)

    Kotegawa, Tatsuya

    Complexity in the Air Transportation System (ATS) arises from the intermingling of many independent physical resources, operational paradigms, and stakeholder interests, as well as the dynamic variation of these interactions over time. Currently, trade-offs and cost-benefit analyses of new ATS concepts are carried out with system-wide evaluation simulations driven by air traffic forecasts that assume fixed airline routes. However, this does not reflect reality well, as airlines regularly add and remove routes. An airline service route network evolution model that projects route addition and removal was created and combined with state-of-the-art air traffic forecast methods to better reflect the dynamic properties of the ATS in system-wide simulations. Guided by a system-of-systems framework, network theory metrics and machine learning algorithms were applied to develop the route network evolution models based on patterns extracted from historical data. Constructing the route addition section of the model posed the greatest challenge due to the large pool of new link candidates compared to the actual number of routes historically added to the network. Of the models explored, algorithms based on logistic regression, random forests, and support vector machines showed the best route addition and removal forecast accuracies, at approximately 20% and 40%, respectively, when validated with historical data. The combination of network evolution models and a system-wide evaluation tool quantified the impact of airline route network evolution on air traffic delay. The expected delay minutes when considering network evolution increased approximately 5% for a forecasted schedule on 3/19/2020. Performance trade-off studies between several airline route network topologies from the perspectives of passenger travel efficiency, fuel burn, and robustness were also conducted to provide bounds that could serve as targets for ATS transformation efforts. The series of analyses revealed that high

  5. A Scalable Neuro-inspired Robot Controller Integrating a Machine Learning Algorithm and a Spiking Cerebellar-like Network

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Lund, Henrik Hautop

    2017-01-01

    Combining Fable robot, a modular robot, with a neuroinspired controller, we present the proof of principle of a system that can scale to several neurally controlled compliant modules. The motor control and learning of a robot module are carried out by a Unit Learning Machine (ULM) that embeds the Locally Weighted Projection Regression algorithm (LWPR) and a spiking cerebellar-like microcircuit. The LWPR guarantees both an optimized representation of the input space and the learning of the dynamic internal model (IM) of the robot. However, the cerebellar-like sub-circuit integrates LWPR input-driven contributions to deliver accurate corrective commands to the global IM. This article extends the earlier work by including the Deep Cerebellar Nuclei (DCN) and by reproducing the Purkinje and the DCN layers using a spiking neural network (SNN) implemented on the neuromorphic SpiNNaker platform. The performance

  6. Variances handling method of clinical pathways based on T-S fuzzy neural networks with novel hybrid learning algorithm.

    Science.gov (United States)

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Ye, Yan; Yao, Yang

    2012-06-01

    Clinical pathway variances present complex, fuzzy, uncertain and high-risk characteristics. They can cause complicating diseases or even endanger patients' lives if not handled effectively. In order to improve the accuracy and efficiency of variance handling by Takagi-Sugeno (T-S) fuzzy neural networks (FNNs), a new variance handling method for clinical pathways (CPs) is proposed in this study, based on T-S FNNs with a novel hybrid learning algorithm. The optimal structure and parameters are obtained simultaneously by integrating the random cooperative decomposing particle swarm optimization (RCDPSO) algorithm and the discrete binary version of PSO (DPSO). Finally, a case study on liver poisoning in an osteosarcoma preoperative chemotherapy CP is used to validate the proposed method. The results demonstrate that T-S FNNs based on the proposed algorithm achieve superior performance in efficiency, precision, and generalization ability compared to standard T-S FNNs, Mamdani FNNs and T-S FNNs based on other algorithms (CPSO and PSO) for variance handling of CPs.

  7. LANN-SVD: A Non-Iterative SVD-Based Learning Algorithm for One-Layer Neural Networks.

    Science.gov (United States)

    Fontenla-Romero, Oscar; Perez-Sanchez, Beatriz; Guijarro-Berdinas, Bertha

    2017-09-01

    In the scope of data analytics, the volume of a data set can be defined as the product of the number of instances and the dimensionality of the data. In many real problems, data sets are large in only one of these aspects. Machine learning methods proposed in the literature are able to learn efficiently in only one of these two situations, when the number of variables is much greater than the number of instances or vice versa. However, there is no proposal that efficiently handles either circumstance in a large-scale scenario. In this brief, we present an approach that addresses both situations, large dimensionality or large instance size, by using a singular value decomposition (SVD) within a learning algorithm for a one-layer feedforward neural network. As a result, a non-iterative solution is obtained in which the weights can be calculated in closed form, thereby avoiding low convergence rates and hyperparameter tuning. The proposed learning method, LANN-SVD in short, exhibits good computational efficiency for large-scale data analytics. Comprehensive comparisons were conducted to assess LANN-SVD against other state-of-the-art algorithms. The results exhibit the superior efficiency of the proposed method in either circumstance.
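
    The core of such a non-iterative learner is a closed-form least-squares solve, which can be carried out through the SVD (equivalently, a regularized pseudo-inverse). The sketch below shows that closed-form step for a one-layer linear network; it mirrors the spirit of LANN-SVD but is not the paper's exact formulation, and the regularization constant is an assumption.

```python
import numpy as np

def fit_one_layer(X, Y, reg=1e-8):
    """Closed-form weights for a one-layer linear network using the SVD
    (a regularized Moore-Penrose pseudo-inverse).  A bias column is appended."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])       # add bias term
    U, s, Vt = np.linalg.svd(Xb, full_matrices=False)
    s_inv = s / (s ** 2 + reg)                           # regularized inverse of singular values
    W = Vt.T @ np.diag(s_inv) @ U.T @ Y                  # shape: (n_features + 1, n_outputs)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ W
```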

  8. Using an Extended Kalman Filter Learning Algorithm for Feed-Forward Neural Networks to Describe Tracer Correlations

    Science.gov (United States)

    Lary, David J.; Mussa, Yussuf

    2004-01-01

    In this study a new extended Kalman filter (EKF) learning algorithm for feed-forward neural networks (FFN) is used. With the EKF approach, the training of the FFN can be seen as state estimation for a non-linear stationary process. The EKF method gives excellent convergence performances provided that there is enough computer core memory and that the machine precision is high. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). The neural network was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9997. The neural network Fortran code used is available for download.

  9. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

    Full Text Available This paper proposes a new approach to the optimal design of a Feed-forward Neural Network (FNN)-based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint-handling technique based on a self-adaptive penalty procedure, which directs the entire search effort towards finding only acceptable Pareto optimal solutions. Neurons and connections of the FNN classifier are built dynamically during the learning process. The approach uses differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier is applied to six binary classification benchmark problems obtained from the UCI repository, and the results indicate the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers recently reported in the literature.

  10. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems.
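
    The basic ELM recipe is easy to state: draw the hidden-layer weights at random, never train them, and solve the output weights by least squares; the 'receptive field' idea corresponds to zeroing most input weights outside a random patch. The sketch below assumes one-hot target matrices and uses a crude random sparsity mask as a stand-in for the paper's image patches.

```python
import numpy as np

def elm_train(X, Y, n_hidden=1000, patch_frac=0.1, seed=0):
    """Bare-bones Extreme Learning Machine.  Hidden weights are random and
    never trained; only the output weights are solved by least squares.
    Each hidden unit is restricted to a random subset of inputs, a crude
    stand-in for the paper's random receptive-field patches."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(size=(n_in, n_hidden))
    mask = rng.random((n_in, n_hidden)) < patch_frac   # sparse input weights (~90% zeros)
    W *= mask
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                             # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                       # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta                   # argmax over columns gives the class
```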

  11. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Science.gov (United States)

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems.

  12. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2016-12-22

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.

  13. Complex networks an algorithmic perspective

    CERN Document Server

    Erciyes, Kayhan

    2014-01-01

    Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every r

  14. Multilayer Optical Learning Networks

    Science.gov (United States)

    Wagner, Kelvin; Psaltis, Demetri

    1987-08-01

    In this paper we present a new approach to learning in a multilayer optical neural network which is based on holographically interconnected nonlinear Fabry-Perot etalons. The network can learn the interconnections that form a distributed representation of a desired pattern transformation operation. The interconnections are formed in an adaptive and self-aligning fashion, as volume holographic gratings in photorefractive crystals. Parallel arrays of globally space-integrated inner products diffracted by the interconnecting hologram illuminate arrays of nonlinear Fabry-Perot etalons for fast thresholding of the transformed patterns. A phase-conjugated reference wave interferes with a backwards propagating error signal to form holographic interference patterns which are time-integrated in the volume of the photorefractive crystal in order to slowly modify and learn the appropriate self-aligning interconnections. A holographic implementation of a single-layer perceptron learning procedure is presented that can be extended to a multilayer learning network through an optical implementation of the backward error propagation (BEP) algorithm.

  15. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.

  16. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, the Japanese Yen/US Dollar. The effects of two learning methods, back propagation and a genetic algorithm, were investigated with the neural network topology and other parameters held fixed. The early results indicate that this hybrid system seems well suited to the forecasting of foreign exchange rates. The neural networks and the genetic algorithm were programmed using MATLAB.

  17. Gossip algorithms in quantum networks

    Science.gov (United States)

    Siomau, Michael

    2017-01-01

    "Gossip algorithms" is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.

  18. A method for classification of network traffic based on C5.0 Machine Learning Algorithm

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup

    2012-01-01

    and classification, an algorithm for recognizing flow direction and the C5.0 itself. Classified applications include Skype, FTP, torrent, web browser traffic, web radio, interactive gaming and SSH. We performed subsequent tries using different sets of parameters and both training and classification options...

  19. Gossip algorithms in quantum networks

    Energy Technology Data Exchange (ETDEWEB)

    Siomau, Michael, E-mail: siomau@nld.ds.mpg.de [Physics Department, Jazan University, P.O. Box 114, 45142 Jazan (Saudi Arabia); Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany)

    2017-01-23

    "Gossip algorithms" is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.

  20. Learning Networks for Lifelong Learning

    OpenAIRE

    Sloep, Peter

    2008-01-01

    Presentation in a seminar organized by Christopher Hoadley at Penn State University, October 2004. Contains a general introduction to the Learning Network Programme and a demonstration of the NetLogo simulation of a Learning Network.

  1. Ensemble algorithms in reinforcement learning

    NARCIS (Netherlands)

    Wiering, Marco A; van Hasselt, Hado

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and

  2. Learning interactive learning strategy with genetic algorithm

    OpenAIRE

    Hanzel, Jan

    2014-01-01

    The main goal of this thesis was to develop an algorithm for learning the best strategy in the case of interactive learning between a human and a robot. We presented a definition and formalization of a learning strategy. A learning strategy specifies the behaviour of a student and a teacher in an interactive learning process. We also presented a genetic algorithm to solve our optimisation problem. We tried to improve the vectors which are used to represent learning strategies. The vectors were...

  3. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  4. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; FUJIKI, Nahomi, M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  5. Ensemble algorithms in reinforcement learning.

    Science.gov (United States)

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.
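
    Two of the combination rules named above, majority voting and Boltzmann multiplication, are easy to sketch once each constituent algorithm exposes a value table. The functions below assume dictionary-style Q-tables keyed by (state, action) and are an illustrative reading rather than the paper's exact implementation.

```python
import numpy as np

def majority_vote(q_tables, state, n_actions):
    """Each RL algorithm votes for its greedy action; ties go to the lowest index."""
    votes = np.zeros(n_actions)
    for q in q_tables:                       # q: dict {(state, action): value}
        greedy = max(range(n_actions), key=lambda a: q.get((state, a), 0.0))
        votes[greedy] += 1
    return int(np.argmax(votes))

def boltzmann_multiplication(q_tables, state, n_actions, tau=1.0):
    """Multiply the Boltzmann action probabilities of the individual algorithms
    and sample from the renormalised product distribution."""
    probs = np.ones(n_actions)
    for q in q_tables:
        prefs = np.array([q.get((state, a), 0.0) for a in range(n_actions)]) / tau
        p = np.exp(prefs - prefs.max())
        probs *= p / p.sum()
    probs /= probs.sum()
    return int(np.random.choice(n_actions, p=probs))
```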

  6. Learning Networks for Lifelong Learning

    NARCIS (Netherlands)

    Koper, Rob

    2004-01-01

    Presentation in a seminar organized by Christopher Hoadley at Penn State University, October 2004. Contains a general introduction to the Learning Network Programme and a demonstration of the NetLogo simulation of a Learning Network.

  7. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  8. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  9. A New Optimized GA-RBF Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Weikuan Jia

    2014-01-01

    Full Text Available When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new scheme of hybrid encoding and simultaneous optimization. Binary encoding is used to encode the number of hidden-layer neurons and real encoding is used to encode the connection weights. The number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection weight optimization is not complete; the least mean square (LMS) algorithm is used for further learning, and finally a new algorithm model is obtained. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
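
    The division of labour described above, a genetic algorithm choosing the hidden-layer structure while a least-mean-squares step refines the output weights, can be sketched with a toy GA over subsets of training points used as RBF centres. Everything below (the fitness, the operators, the least-squares stand-in for LMS, the population sizes) is an illustrative assumption, not the paper's encoding.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Hidden-layer activations for Gaussian RBF units."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fitness(X, y, center_idx, width):
    """Negative training MSE with least-squares output weights (a simple
    stand-in for the paper's LMS refinement step)."""
    Phi = rbf_design(X, X[center_idx], width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return -np.mean((Phi @ w - y) ** 2)

def ga_rbf(X, y, width=1.0, pop_size=20, generations=30, max_hidden=20, seed=0):
    """Toy GA: an individual is a list of training-point indices used as RBF
    centres, so the hidden-layer size and the centres evolve together.
    Assumes len(X) > max_hidden; all operator choices are illustrative."""
    rng = np.random.default_rng(seed)
    def random_individual():
        k = int(rng.integers(2, max_hidden + 1))
        return list(map(int, rng.choice(len(X), size=k, replace=False)))
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda ind: fitness(X, y, ind, width), reverse=True)
        parents = ranked[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            pa, pb = rng.choice(len(parents), size=2, replace=False)
            child = sorted(set(parents[pa][::2] + parents[pb][1::2]))      # crossover on index sets
            if rng.random() < 0.3:                                         # mutation: swap in a new centre
                child[int(rng.integers(len(child)))] = int(rng.integers(len(X)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda ind: fitness(X, y, ind, width))
```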

  10. A new optimized GA-RBF neural network algorithm.

    Science.gov (United States)

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new scheme of hybrid encoding and simultaneous optimization. Binary encoding is used to encode the number of hidden-layer neurons and real encoding is used to encode the connection weights. The number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection weight optimization is not complete; the least mean square (LMS) algorithm is used for further learning, and finally a new algorithm model is obtained. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.

  11. On stochastic approximation algorithms for classes of PAC learning problems

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V.; Uppuluri, V.R.R.; Oblow, E.M.

    1994-03-01

    The classical stochastic approximation methods are shown to yield algorithms to solve several formulations of the PAC learning problem defined on the domain [0,1]^d. Under some assumptions on the differentiability of the probability measure functions, simple algorithms to solve some PAC learning problems are proposed based on networks of non-polynomial units (e.g. artificial neural networks). Conditions on the sample sizes required to ensure the error bounds are derived using martingale inequalities.

  12. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    In this paper, a quantum-based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding the weights and a separability parameter in a quantum-based manner. The quantum computing concept represents solutions probabilistically ...

  13. Consensus-based sparse signal reconstruction algorithm for wireless sensor networks

    National Research Council Canada - National Science Library

    Peng, Bao; Zhao, Zhi; Han, Guangjie; Shen, Jian

    2016-01-01

    This article presents a distributed Bayesian reconstruction algorithm for wireless sensor networks to reconstruct the sparse signals based on variational sparse Bayesian learning and consensus filter...

  14. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    Science.gov (United States)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly operate in a centralized manner which requires maintaining overall and up-to-date information about the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA), operated in the network central server and within individual sub-networks, respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between the GVNMA and LVNMA for all examined networks, whilst performing almost as well as the centralized solutions.

  15. Evolutionary algorithms for mobile ad hoc networks

    CERN Document Server

    Dorronsoro, Bernabé; Danoy, Grégoire; Pigné, Yoann; Bouvry, Pascal

    2014-01-01

    Describes how evolutionary algorithms (EAs) can be used to identify, model, and minimize day-to-day problems that arise for researchers in optimization and mobile networking. Mobile ad hoc networks (MANETs), vehicular networks (VANETs), sensor networks (SNs), and hybrid networks—each of these require a designer’s keen sense and knowledge of evolutionary algorithms in order to help with the common issues that plague professionals involved in optimization and mobile networking. This book introduces readers to both mobile ad hoc networks and evolutionary algorithms, presenting basic concepts as well as detailed descriptions of each. It demonstrates how metaheuristics and evolutionary algorithms (EAs) can be used to help provide low-cost operations in the optimization process—allowing designers to put some “intelligence” or sophistication into the design. It also offers efficient and accurate information on dissemination algorithms topology management, and mobility models to address challenges in the ...

  16. Algorithms for radio networks with dynamic topology

    Science.gov (United States)

    Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose

    1991-08-01

    The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.

  17. Learning conditional Gaussian networks

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers conditional Gaussian networks. The parameters in the network are learned by using conjugate Bayesian analysis. As conjugate local priors, we apply the Dirichlet distribution for discrete variables and the Gaussian-inverse gamma distribution for continuous variables, given...... a configuration of the discrete parents. We assume parameter independence and complete data. Further, to learn the structure of the network, the network score is deduced. We then develop a local master prior procedure, for deriving parameter priors in these networks. This procedure satisfies parameter...... independence, parameter modularity and likelihood equivalence. Bayes factors to be used in model search are introduced. Finally the methods derived are illustrated by a simple example....

  18. SelfieBoost: A Boosting Algorithm for Deep Learning

    OpenAIRE

    Shalev-Shwartz, Shai

    2014-01-01

    We describe and analyze a new boosting algorithm for deep learning called SelfieBoost. Unlike other boosting algorithms, like AdaBoost, which construct ensembles of classifiers, SelfieBoost boosts the accuracy of a single network. We prove a $\log(1/\epsilon)$ convergence rate for SelfieBoost under some "SGD success" assumption which seems to hold in practice.

  19. Learning chaotic attractors by neural networks

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Giles, CL; Takens, F; van den Bleek, CM

    2000-01-01

    An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to make short-term predictions of the time series. At the same time a criterion, developed by Diks, van Zwet, Takens, and de Goede (1996), is monitored

  20. Classifying epilepsy diseases using artificial neural networks and genetic algorithm.

    Science.gov (United States)

    Koçer, Sabri; Canal, M Rahmi

    2011-08-01

    In this study, FFT analysis is applied to the EEG signals of normal and patient subjects, and the obtained FFT coefficients are used as inputs to an Artificial Neural Network (ANN). The differences shown by non-stationary random signals such as EEG signals in states of health and sickness (epilepsy) were evaluated and analyzed under computer-supported conditions by using artificial neural networks. A Multi-Layer Perceptron (MLP) architecture is used with the Levenberg-Marquardt (LM), Quickprop (QP), Delta-bar-delta (DBD), Momentum and Conjugate gradient (CG) learning algorithms, and the best performance was sought by using genetic algorithms to optimize the weights, learning rates, and number of hidden-layer neurons during the training process. This study shows that the artificial neural network achieves higher classification performance when combined with the genetic algorithm.
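
    As a rough illustration of the pipeline described above, the following sketch extracts FFT magnitude coefficients from synthetic EEG-like segments and feeds them to an MLP classifier; the data, segment length, and classifier settings are hypothetical, and the genetic-algorithm optimization stage is omitted.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        # Hypothetical data: 200 one-second "EEG" segments sampled at 256 Hz.
        t = np.arange(256) / 256.0
        labels = rng.integers(0, 2, size=200)            # 0 = normal, 1 = patient
        segments = rng.standard_normal((200, 256))
        segments[labels == 1] += 2.0 * np.sin(2 * np.pi * 5 * t)   # extra 5 Hz rhythm

        # Feature extraction: magnitudes of the first 64 FFT coefficients.
        features = np.abs(np.fft.rfft(segments, axis=1))[:, :64]

        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        clf.fit(features[:150], labels[:150])
        print("held-out accuracy:", clf.score(features[150:], labels[150:]))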

  1. Learning Networks for Lifelong Learning

    NARCIS (Netherlands)

    Koper, Rob

    2004-01-01

    Presentation at: "Learning Designs in a Networked World A Dutch - Canada Education Seminar", October 15th, 2004, University of Alberta, Edmonton, Canada. Similar presentation as: http://hdl.handle.net/1820/278

  2. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available The method in which a real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed to overcome the defect that the gradient descent method makes the algorithm easily fall into a local optimum in the learning process. The quantum genetic algorithm (QGA) has good directional global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. So, RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm converges rapidly to a solution that satisfies the constraint conditions.

  3. Algorithms and networking for computer games

    CERN Document Server

    Smed, Jouni

    2006-01-01

    Algorithms and Networking for Computer Games is an essential guide to solving the algorithmic and networking problems of modern commercial computer games, written from the perspective of a computer scientist. Combining algorithmic knowledge and game-related problems, the authors discuss all the common difficulties encountered in game programming. The first part of the book tackles algorithmic problems by presenting how they can be solved practically. As well as "classical" topics such as random numbers, tournaments and game trees, the authors focus on how to find a path in, create the terrai

  4. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm.

    Science.gov (United States)

    Wu, Haizhou; Zhou, Yongquan; Luo, Qifang; Basset, Mohamed Abdel

    2016-01-01

    Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are used for experiments, and the results are compared among seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS has better accuracy than most of the compared algorithms.

  5. Genetic Algorithms in Wireless Networking: Techniques, Applications, and Issues

    OpenAIRE

    Mehboob, Usama; Qadir, Junaid; Ali, Salman; Vasilakos, Athanasios

    2014-01-01

    In recent times, wireless access technology is becoming increasingly commonplace due to the ease of operation and installation of untethered wireless media. The design of wireless networking is challenging due to the highly dynamic environmental condition that makes parameter optimization a complex task. Due to the dynamic, and often unknown, operating conditions, modern wireless networking standards increasingly rely on machine learning and artificial intelligence algorithms. Genetic algorit...

  6. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Genetic Algorithm Optimized Neural Networks Ensemble as Calibration Model for Simultaneous Spectrophotometric Estimation of Atenolol and Losartan Potassium in Tablets. Dondeti Satyanarayana*, Kamarajan Kannan and Rajappan Manavalan. Department of Pharmacy, Annamalai University, Annamalainagar, Tamil ...

  7. Neural networks art: solving problems with multiple solutions and new teaching algorithm.

    Science.gov (United States)

    Dmitrienko, V D; Zakovorotnyi, A Yu; Leonov, S Yu; Khavina, I P

    2014-01-01

    A new discrete adaptive resonance theory (ART) neural network, which allows solving problems with multiple solutions, is developed. New algorithms for teaching ART neural networks are developed that prevent the degradation and duplication of classes when training on noisy input data. The proposed learning algorithms for discrete ART networks allow different methods of classifying the input to be obtained.

  8. Flow enforcement algorithms for ATM networks

    DEFF Research Database (Denmark)

    Dittmann, Lars; Jacobsen, Søren B.; Moth, Klaus

    1991-01-01

    Four measurement algorithms for flow enforcement in asynchronous transfer mode (ATM) networks are presented: the leaky bucket, the rectangular sliding window, the triangular sliding window, and the exponentially weighted moving average. A comparison, based partly on teletraffic theory and partly on signal processing theory, is carried out. It is seen that the time constant involved increases with the increasing burstiness of the connection. It is suggested that the RMS measurement bandwidth be used to dimension linear algorithms for equal flow enforcement characteristics. Implementations are proposed on the block diagram level, and dimensioning examples are carried out when flow enforcing a renewal-type connection using the four algorithms. The corresponding hardware demands are estimated and compared.
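
    A minimal sketch of the first of the four algorithms, the leaky bucket, is given below; the leak rate, bucket depth, and arrival times are hypothetical, and the sketch ignores ATM-specific cell formats.

        class LeakyBucket:
            def __init__(self, leak_rate, depth):
                self.leak_rate = leak_rate   # counter decrease per time unit
                self.depth = depth           # maximum counter value
                self.counter = 0.0
                self.last_time = 0.0

            def cell_arrival(self, t):
                """Return True if the cell arriving at time t conforms, False if policed."""
                # Drain the bucket for the time elapsed since the last arrival.
                self.counter = max(0.0, self.counter - (t - self.last_time) * self.leak_rate)
                self.last_time = t
                if self.counter + 1.0 <= self.depth:
                    self.counter += 1.0
                    return True
                return False

        bucket = LeakyBucket(leak_rate=1.0, depth=3)
        arrivals = [0.1, 0.2, 0.3, 0.4, 0.5, 3.0]
        print([bucket.cell_arrival(t) for t in arrivals])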

  9. GEOLOGICAL MAPPING USING MACHINE LEARNING ALGORITHMS

    National Research Council Canada - National Science Library

    A. S. Harvey; G. Fotopoulos

    2016-01-01

    .... Using these data with Machine Learning Algorithms (MLA), which are widely used in image analysis and statistical pattern recognition applications, may enhance preliminary geological mapping and interpretation...

  10. Learning Bayesian networks for discrete data

    KAUST Repository

    Liang, Faming

    2009-02-01

    Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learn Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. Firstly, it possesses the self-adjusting mechanism and thus avoids essentially the local-trap problem suffered by conventional MCMC simulation-based approaches in learning Bayesian networks. Secondly, it falls into the class of dynamic importance sampling algorithms; the network features can be inferred by dynamically weighted averaging the samples generated in the learning process, and the resulting estimates can have much lower variation than the single model-based estimates. The numerical results indicate that our approach can mix much faster over the space of Bayesian networks than the conventional MCMC simulation-based approaches. © 2008 Elsevier B.V. All rights reserved.

  11. Optimization of deep learning algorithms for object classification

    Science.gov (United States)

    Horváth, András.

    2017-02-01

    Deep learning is currently the state-of-the-art approach for image classification. The complexity of these feedforward neural networks has passed a critical point, resulting in algorithmic breakthroughs in various fields. On the other hand, their complexity makes them practical only in tasks where high-throughput computing power is available. The optimization of these networks, considering computational complexity and applicability on embedded systems, has not yet been studied in detail. In this paper I show some examples of how these algorithms can be optimized and accelerated on embedded systems.

  12. Stochastic Online Learning in Dynamic Networks under Unknown Models

    Science.gov (United States)

    2016-08-02

    This research aims to develop fundamental theories and practical algorithms for stochastic online learning in dynamic networks under unknown models. Keywords: online learning, multi-armed bandit, dynamic networks.

  13. Hyperbolic Gradient Operator and Hyperbolic Back-Propagation Learning Algorithms.

    Science.gov (United States)

    Nitta, Tohru; Kuroe, Yasuaki

    2017-03-23

    In this paper, we first extend the Wirtinger derivative, which is defined for complex functions, to hyperbolic functions, and derive the hyperbolic gradient operator yielding the steepest descent direction by using it. Next, we derive hyperbolic backpropagation learning algorithms for some multilayered hyperbolic neural networks (NNs) using the hyperbolic gradient operator. It is shown that the use of the Wirtinger derivative reduces the effort necessary for the derivation of the learning algorithms by half, simplifies the representation of the learning algorithms, and makes their computer programs easier to code. In addition, we discuss the differences between the derived Hyperbolic-BP rules and the complex-valued backpropagation learning rule (Complex-BP). Finally, we perform some experiments with the derived learning algorithms. As a result, we find that the convergence rates of the Hyperbolic-BP learning algorithms are high even if fully hyperbolic activation functions are used, and discover that the Hyperbolic-BP learning algorithm for the hyperbolic NN with the split-type hyperbolic activation function has the ability to learn hyperbolic rotation as an inherent property.

  14. A Multidomain Survivable Virtual Network Mapping Algorithm

    Directory of Open Access Journals (Sweden)

    Xiancui Xiao

    2017-01-01

    Full Text Available Although existing networks are increasingly deployed in multidomain environments, most existing research focuses on single-domain networks and there are no appropriate solutions for the multidomain virtual network mapping problem. In fact, most studies assume that the underlying network can operate without any interruption. However, physical networks cannot ensure the normal provision of network services for external reasons, and traditional single-domain networks have difficulty meeting user needs, especially the high security requirements of network transmission. In order to solve the above problems, this paper proposes a survivable virtual network mapping algorithm (IntD-GRC-SVNE) that implements multidomain mapping in network virtualization. IntD-GRC-SVNE maps the virtual communication networks onto different domain networks and provides backup resources for virtual links, which improves the survivability of the special networks. Simulation results show that IntD-GRC-SVNE can not only improve the survivability of multidomain communication networks but also render the network load more balanced and greatly improve the network acceptance rate due to the employment of GRC (global resource capacity).

  15. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
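
    The following sketch illustrates the vectorization idea (state updates as whole-array operations rather than per-neuron Python loops) on a leaky integrate-and-fire population; it is not Brian code, and the model parameters are hypothetical.

        import numpy as np

        n, dt, steps = 1000, 0.1e-3, 5000                 # neurons, time step (s), steps
        tau, v_rest, v_thresh, v_reset = 10e-3, 0.0, 1.0, 0.0
        v = np.zeros(n)
        rng = np.random.default_rng(0)
        spike_count = np.zeros(n, dtype=int)

        for _ in range(steps):
            i_ext = rng.normal(1.1, 0.5, size=n)          # noisy external drive
            v += dt / tau * (v_rest - v + i_ext)          # vectorized integration
            spiking = v >= v_thresh                       # boolean spike mask
            spike_count += spiking
            v[spiking] = v_reset                          # vectorized reset

        print("mean firing rate (Hz):", spike_count.mean() / (steps * dt))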

  16. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    Science.gov (United States)

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  17. Algorithmization in Learning and Instruction.

    Science.gov (United States)

    Landa, L. N.

    An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…

  18. A Vehicle Detection Algorithm Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Hai Wang

    2014-01-01

    Full Text Available Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep learning based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. In the algorithm, the proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection for retaining discriminative information so as to determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the testing data sets.

  19. CN: a consensus algorithm for inferring gene regulatory networks using the SORDER algorithm and conditional mutual information test.

    Science.gov (United States)

    Aghdam, Rosa; Ganjali, Mojtaba; Zhang, Xiujun; Eslahchi, Changiz

    2015-03-01

    Inferring Gene Regulatory Networks (GRNs) from gene expression data is a major challenge in systems biology. The Path Consistency (PC) algorithm is one of the popular methods in this field. However, as an order dependent algorithm, PC algorithm is not robust because it achieves different network topologies if gene orders are permuted. In addition, the performance of this algorithm depends on the threshold value used for independence tests. Consequently, selecting suitable sequential ordering of nodes and an appropriate threshold value for the inputs of PC algorithm are challenges to infer a good GRN. In this work, we propose a heuristic algorithm, namely SORDER, to find a suitable sequential ordering of nodes. Based on the SORDER algorithm and a suitable interval threshold for Conditional Mutual Information (CMI) tests, a network inference method, namely the Consensus Network (CN), has been developed. In the proposed method, for each edge of the complete graph, a weighted value is defined. This value is considered as the reliability value of dependency between two nodes. The final inferred network, obtained using the CN algorithm, contains edges with a reliability value of dependency of more than a defined threshold. The effectiveness of this method is benchmarked through several networks from the DREAM challenge and the widely used SOS DNA repair network in Escherichia coli. The results indicate that the CN algorithm is suitable for learning GRNs and it considerably improves the precision of network inference. The source of data sets and codes are available at .

  20. Cognitive Radio Transceivers: RF, Spectrum Sensing, and Learning Algorithms Review

    Directory of Open Access Journals (Sweden)

    Lise Safatly

    2014-01-01

    reconfigurable radio frequency (RF) parts, enhanced spectrum sensing algorithms, and sophisticated machine learning techniques. In this paper, we present a review of the recent advances in CR transceivers hardware design and algorithms. For the RF part, three types of antennas are presented: UWB antennas, frequency-reconfigurable/tunable antennas, and UWB antennas with reconfigurable band notches. The main challenges faced by the design of the other RF blocks are also discussed. Sophisticated spectrum sensing algorithms that overcome main sensing challenges such as model uncertainty, hardware impairments, and wideband sensing are highlighted. The cognitive engine features are discussed. Moreover, we study unsupervised classification algorithms and a reinforcement learning (RL) algorithm that has been proposed to perform decision-making in CR networks.

  1. Learning theory of distributed spectral algorithms

    Science.gov (United States)

    Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

    2017-07-01

    Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms.
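
    The divide-and-conquer idea can be sketched with kernel ridge regression, one member of the spectral-algorithm family: each data subset fits its own local estimator and the local predictions are averaged. The data, kernel width, and regularization below are hypothetical.

        import numpy as np

        def krr_fit(x, y, lam, gamma):
            k = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)   # Gaussian kernel
            alpha = np.linalg.solve(k + lam * len(x) * np.eye(len(x)), y)
            return x, alpha

        def krr_predict(model, x_new, gamma):
            x, alpha = model
            k = np.exp(-gamma * (x_new[:, None] - x[None, :]) ** 2)
            return k @ alpha

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, 600)
        y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(600)

        m = 6                                                     # number of "machines"
        models = [krr_fit(xs, ys, lam=1e-3, gamma=50.0)
                  for xs, ys in zip(np.array_split(x, m), np.array_split(y, m))]

        x_test = np.linspace(0, 1, 5)
        f_bar = np.mean([krr_predict(mod, x_test, 50.0) for mod in models], axis=0)
        print(np.round(f_bar, 3))
        print(np.round(np.sin(2 * np.pi * x_test), 3))            # ground truth for comparison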

  2. Learning Latent Structure in Complex Networks

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    Latent structure in complex networks, e.g., in the form of community structure, can help understand network dynamics, identify heterogeneities in network properties, and predict ‘missing’ links. While most community detection algorithms are based on optimizing heuristic clustering objectives...... prediction performance of the learning based approaches and other widely used link prediction approaches in 14 networks ranging from medium size to large networks with more than a million nodes. While link prediction is typically well above chance for all networks, we find that the learning based mixed...... membership stochastic block model of Airoldi et al., performs well and often best in our experiments. The added complexity of the LD model improves link predictions for four of the 14 networks....

  3. Rethinking the learning of belief network probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Musick, R.

    1996-03-01

    Belief networks are a powerful tool for knowledge discovery that provide concise, understandable probabilistic models of data. There are methods grounded in probability theory to incrementally update the relationships described by the belief network when new information is seen, to perform complex inferences over any set of variables in the data, to incorporate domain expertise and prior knowledge into the model, and to automatically learn the model from data. This paper concentrates on part of the belief network induction problem, that of learning the quantitative structure (the conditional probabilities), given the qualitative structure. In particular, the current practice of rote learning the probabilities in belief networks can be significantly improved upon. We advance the idea of applying any learning algorithm to the task of conditional probability learning in belief networks, discuss potential benefits, and show results of applying neural networks and other algorithms to a medium sized car insurance belief network. The results demonstrate from 10 to 100% improvements in model error rates over the current approaches.

  4. AdaBoost-based algorithm for network intrusion detection.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
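
    A minimal sketch of the general approach, AdaBoost over decision stumps, is shown below using scikit-learn, whose default weak learner is a depth-1 decision tree; the feature matrix and labels are hypothetical stand-ins for real connection records, and the paper's categorical/continuous decision rules and weighting refinements are not reproduced.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)
        # Hypothetical connection records: 1000 samples, 10 continuous features.
        X = rng.standard_normal((1000, 10))
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # 1 = attack, 0 = normal

        # The default base estimator of AdaBoostClassifier is a decision stump
        # (a depth-1 decision tree), matching the weak classifiers described above.
        model = AdaBoostClassifier(n_estimators=50, random_state=0)
        model.fit(X[:800], y[:800])
        print("detection accuracy on held-out data:", model.score(X[800:], y[800:]))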

  5. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  6. Research, Boundaries, and Policy in Networked Learning

    DEFF Research Database (Denmark)

    This book presents cutting-edge, peer reviewed research on networked learning organized by three themes: policy in networked learning, researching networked learning, and boundaries in networked learning. The "policy in networked learning" section explores networked learning in relation to policy...

  7. Routing algorithms in networks-on-chip

    CERN Document Server

    Daneshtalab, Masoud

    2014-01-01

    This book provides a single-source reference to routing algorithms for Networks-on-Chip (NoCs), as well as in-depth discussions of advanced solutions applied to current and next generation, many core NoC-based Systems-on-Chip (SoCs). After a basic introduction to the NoC design paradigm and architectures, routing algorithms for NoC architectures are presented and discussed at all abstraction levels, from the algorithmic level to actual implementation. Coverage emphasizes the role played by the routing algorithm and is organized around key problems affecting current and next generation, many-core SoCs. A selection of routing algorithms is included, specifically designed to address key issues faced by designers in the ultra-deep sub-micron (UDSM) era, including performance improvement, power, energy, and thermal issues, fault tolerance and reliability. Provides a comprehensive overview of routing algorithms for Networks-on-Chip and NoC-based, manycore systems; Describe...

  8. Learning Networks for Professional Development & Lifelong Learning

    NARCIS (Netherlands)

    Sloep, Peter

    2009-01-01

    Sloep, P. B. (2009). Learning Networks for Professional Development & Lifelong Learning. Presentation at a NeLLL seminar with Etienne Wenger held at the Open Universiteit Nederland. September, 10, 2009, Heerlen, The Netherlands.

  9. Algorithmic Case Pedagogy, Learning and Gender

    Science.gov (United States)

    Bromley, Robert; Huang, Zhenyu

    2015-01-01

    Great investment has been made in developing algorithmically-based cases within online homework management systems. This has been done because publishers are convinced that textbook adoption decisions are influenced by the incorporation of these systems within their products. These algorithmic assignments are thought to promote learning while…

  10. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the results of execution-time changes due to the number of workers.

  11. Analysis of convergence performance of neural networks ranking algorithm.

    Science.gov (United States)

    Zhang, Yongquan; Cao, Feilong

    2012-10-01

    The ranking problem, which has gained much attention in machine learning in recent years, is to learn a real-valued function which gives rise to a ranking over an instance space. This article analyzes the convergence performance of a neural network ranking algorithm by means of the given samples and the approximation property of neural networks. The upper bounds on the convergence rate provided by our results can be considerably tight and independent of the dimension of the input space when the target function satisfies some smoothness condition. The obtained results imply that neural networks are able to adapt to the ranking function in the instance space and are hence able to circumvent the curse of dimensionality under some smoothness condition. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  12. A Transfer Learning Approach for Network Modeling

    Science.gov (United States)

    Huang, Shuai; Li, Jing; Chen, Kewei; Wu, Teresa; Ye, Jieping; Wu, Xia; Yao, Li

    2012-01-01

    Network models have been widely used in many domains to characterize the interacting relationships between physical entities. A typical problem faced is to identify the networks of multiple related tasks that share some similarities. In this case, a transfer learning approach that can leverage the knowledge gained during the modeling of one task to help better model another task is highly desirable. In this paper, we propose a transfer learning approach, which adopts a Bayesian hierarchical model framework to characterize task relatedness and additionally uses the L1-regularization to ensure robust learning of the networks with limited sample sizes. A method based on the Expectation-Maximization (EM) algorithm is further developed to learn the networks from data. Simulation studies are performed, which demonstrate the superiority of the proposed transfer learning approach over single task learning that learns the network of each task in isolation. The proposed approach is also applied to identification of brain connectivity networks of Alzheimer’s disease (AD) from functional magnetic resonance image (fMRI) data. The findings are consistent with the AD literature. PMID:24526804

  13. Learning to forecast: Genetic algorithms and experiments

    NARCIS (Netherlands)

    Makarewicz, T.A.

    2014-01-01

    The central question that this thesis addresses is how economic agents learn to form price expectations, which are a crucial element of macroeconomic and financial models. The thesis applies a Genetic Algorithms model of learning to previous laboratory experiments, explaining the observed

  14. Algorithmic learning in a random world

    CERN Document Server

    Vovk, Vladimir; Shafer, Glenn

    2005-01-01

    A new scientific monograph developing significant new algorithmic foundations in machine learning theory. Researchers and postgraduates in CS, statistics, and A.I. will find the book an authoritative and formal presentation of some of the most promising theoretical developments in machine learning.

  15. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.

  16. Network-based recommendation algorithms: A review

    Science.gov (United States)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of systems that use it, and finally discuss open research directions and challenges.
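
    As one concrete example of the family of algorithms such reviews cover, the sketch below implements probabilistic spreading (ProbS), a well-known network-based recommendation method, on a tiny hypothetical user-item bipartite network; the adjacency matrix is made up for illustration only.

        import numpy as np

        # Rows are users, columns are items; 1 means the user collected the item.
        A = np.array([[1, 1, 0, 0, 1],
                      [0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0],
                      [0, 0, 0, 1, 1]], dtype=float)

        item_deg = A.sum(axis=0)          # how many users collected each item
        user_deg = A.sum(axis=1)          # how many items each user collected

        def probs_scores(user):
            f_items = A[user].copy()                  # unit resource on the user's items
            f_users = A @ (f_items / item_deg)        # items spread resource to users
            f_final = A.T @ (f_users / user_deg)      # users spread it back to items
            f_final[A[user] > 0] = 0.0                # do not re-recommend owned items
            return f_final

        print(np.round(probs_scores(0), 3))           # recommendation scores for user 0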

  17. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, which is also demonstrated by the comprehensive experimental results in this paper.

  18. Imbalanced learning foundations, algorithms, and applications

    CERN Document Server

    He, Haibo

    2013-01-01

    The first book of its kind to review the current status and future direction of the exciting new branch of machine learning/data mining called imbalanced learning Imbalanced learning focuses on how an intelligent system can learn when it is provided with imbalanced data. Solving imbalanced learning problems is critical in numerous data-intensive networked systems, including surveillance, security, Internet, finance, biomedical, defense, and more. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles,

  19. Algorithm For A Self-Growing Neural Network

    Science.gov (United States)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  20. A modified backpropagation learning algorithm with added emotional coefficients.

    Science.gov (United States)

    Khashman, Adnan

    2008-11-01

    Much of the research work into artificial intelligence (AI) has been focusing on exploring various potential applications of intelligent systems with successful results in most cases. In our attempts to model human intelligence by mimicking the brain structure and function, we overlook an important aspect in human learning and decision making: the emotional factor. While it currently sounds impossible to have "machines with emotions," it is quite conceivable to artificially simulate some emotions in machine learning. This paper presents a modified backpropagation (BP) learning algorithm, namely, the emotional backpropagation (EmBP) learning algorithm. The new algorithm has additional emotional weights that are updated using two additional emotional parameters: anxiety and confidence. The proposed "emotional" neural network will be implemented to a facial recognition problem, and the results will be compared to a similar application using a conventional neural network. Experimental results show that the addition of the two novel emotional parameters improves the performance of the neural network yielding higher recognition rates and faster recognition time.

  1. Learning algorithms for human-machine interfaces.

    Science.gov (United States)

    Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A

    2009-05-01

    The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.

  2. Learning Algorithms for Human–Machine Interfaces

    Science.gov (United States)

    Fishbach, Alon; Mussa-Ivaldi, Ferdinando A.

    2012-01-01

    The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore–Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction. PMID:19203886
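
    The two map-update rules mentioned in the abstract can be sketched on a toy linear glove-to-joint-angle problem: a one-shot Moore-Penrose pseudoinverse fit versus iterative LMS gradient descent on past endpoint errors. The dimensions, data, and learning rate below are hypothetical, and the experimental task itself is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        glove = rng.standard_normal((50, 19))        # 50 samples of 19 glove signals
        true_map = rng.standard_normal((19, 2))
        angles = glove @ true_map                    # desired 2 joint angles

        # Moore-Penrose solution: one-shot least-squares fit of the map.
        W_mp = np.linalg.pinv(glove) @ angles

        # LMS gradient descent: iterative reduction of past endpoint errors.
        W_lms = np.zeros((19, 2))
        lr = 0.1
        for _ in range(500):
            err = glove @ W_lms - angles
            W_lms -= lr * glove.T @ err / len(glove)

        print("MP residual: ", np.linalg.norm(glove @ W_mp - angles))
        print("LMS residual:", np.linalg.norm(glove @ W_lms - angles))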

  3. Non-divergence of stochastic discrete time algorithms for PCA neural networks.

    Science.gov (United States)

    Lv, Jian Cheng; Yi, Zhang; Li, Yunxia

    2015-02-01

    Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms cannot be divergent, but it is very difficult to directly study their convergence properties, because they are described by stochastic discrete time (SDT) algorithms. This brief analyzes the original SDT algorithms directly, and derives some invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment by selecting proper learning parameters. Our theoretical results are verified by a series of simulation examples.
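
    A representative stochastic discrete-time PCA learning rule of the kind such analyses cover is Oja's single-neuron rule; the sketch below shows it extracting the principal direction of a toy two-dimensional distribution, with a small, hypothetical learning rate chosen to keep the weight vector bounded.

        import numpy as np

        rng = np.random.default_rng(0)
        C = np.array([[3.0, 1.0], [1.0, 2.0]])          # data covariance
        data = rng.multivariate_normal([0.0, 0.0], C, size=20000)

        w = rng.standard_normal(2)
        eta = 0.01                                       # learning rate (kept small)
        for x in data:
            y = w @ x
            w += eta * y * (x - y * w)                   # Oja's rule

        eigvals, eigvecs = np.linalg.eigh(C)
        principal = eigvecs[:, -1]
        # The learned vector may point in either direction along the principal axis,
        # so we report the absolute cosine of the angle between the two.
        cos = abs(np.dot(w / np.linalg.norm(w), principal))
        print("alignment with principal eigenvector:", round(cos, 3))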

  4. Four Machine Learning Algorithms for Biometrics Fusion: A Comparative Study

    Directory of Open Access Journals (Sweden)

    I. G. Damousis

    2012-01-01

    Full Text Available We examine the efficiency of four machine learning algorithms for the fusion of several biometric modalities to create a multimodal biometric security system. The algorithms examined are Gaussian Mixture Models (GMMs), Artificial Neural Networks (ANNs), Fuzzy Expert Systems (FESs), and Support Vector Machines (SVMs). The fusion of biometrics leads to security systems that exhibit higher recognition rates and lower false alarms compared to unimodal biometric security systems. Supervised learning was carried out using a number of patterns from a well-known benchmark biometrics database, and the validation/testing took place with patterns from the same database which were not included in the training dataset. The comparison of the algorithms reveals that the biometrics fusion system is superior to the original unimodal systems and also to other fusion schemes found in the literature.

  5. Distributed Extreme Learning Machine for Nonlinear Learning over Network

    Directory of Open Access Journals (Sweden)

    Songyan Huang

    2015-02-01

    Full Text Available Distributed data collection and analysis over a network are ubiquitous, especially over a wireless sensor network (WSN). To our knowledge, the data model used in most of the distributed algorithms is linear. However, in real applications, the linearity of systems is not always guaranteed. In nonlinear cases, the single hidden layer feedforward neural network (SLFN) with radial basis function (RBF) hidden neurons has the ability to approximate any continuous function and, thus, may be used as the nonlinear learning system. However, confined by the communication cost, using the distributed version of the conventional algorithms to train the neural network directly is usually prohibited. Fortunately, based on the theorems provided in the extreme learning machine (ELM) literature, we only need to compute the output weights of the SLFN. Computing the output weights itself is a linear learning problem, although the input-output mapping of the overall SLFN is still nonlinear. Using the distributed algorithm to cooperatively compute the output weights of the SLFN, we obtain a distributed extreme learning machine (dELM) for nonlinear learning in this paper. This dELM is applied to regression and classification problems to demonstrate its effectiveness and advantages.
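
    A minimal single-machine sketch of the ELM principle described above is given below: random RBF hidden units and output weights obtained by a linear least-squares solve. It is not the distributed (dELM) version, and the data, number of hidden neurons, and kernel width are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (300, 2))
        y = np.sin(np.pi * X[:, 0]) * X[:, 1]            # target function

        L = 50                                            # hidden neurons
        centers = rng.uniform(-1, 1, (L, 2))              # random RBF centers
        width = 1.0
        H = np.exp(-width * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))

        beta = np.linalg.pinv(H) @ y                      # output weights (linear solve)

        X_test = rng.uniform(-1, 1, (5, 2))
        H_test = np.exp(-width * ((X_test[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
        print(np.round(H_test @ beta, 3))
        print(np.round(np.sin(np.pi * X_test[:, 0]) * X_test[:, 1], 3))   # ground truth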

  6. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    Directory of Open Access Journals (Sweden)

    Yongjun Xu

    2012-02-01

    Full Text Available In Underwater Wireless Sensor Networks (UWSNs), localization is one of most important technologies since it plays a critical role in many applications. Motivated by widespread adoption of localization, in this paper, we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes’ mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs.

  7. Localization algorithms of Underwater Wireless Sensor Networks: a survey.

    Science.gov (United States)

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of most important technologies since it plays a critical role in many applications. Motivated by widespread adoption of localization, in this paper, we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes' mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs.

  8. An Improved Harmony Search Algorithm for Power Distribution Network Planning

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-01-01

    Full Text Available Distribution network planning, because it involves many variables and constraints, is a multiobjective, discrete, nonlinear, and large-scale optimization problem. The harmony search (HS) algorithm is a metaheuristic algorithm inspired by the improvisation process of music players. The HS algorithm has several impressive advantages, such as easy implementation, few adjustable parameters, and quick convergence. But the HS algorithm still has some defects such as premature convergence and slow convergence speed. According to the defects of the standard algorithm and the characteristics of distribution network planning, an improved harmony search (IHS) algorithm is proposed in this paper. We set up a mathematical model of distribution network structure planning, whose objective function is to minimize the annual cost and whose constraint conditions are overload and a radial network. The IHS algorithm is applied to solve this complex optimization model. The empirical results strongly indicate that the IHS algorithm can provide better results for the distribution network planning problem than other optimization algorithms.
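
    A sketch of the basic harmony search loop (memory consideration, pitch adjustment, random selection, and replacement of the worst harmony) is given below; it uses a toy continuous objective in place of the paper's annual-cost planning model, and all parameter values are hypothetical rather than the IHS settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def objective(x):                     # placeholder for the planning cost function
            return np.sum((x - 0.5) ** 2)

        dim, hms, hmcr, par, bw, iters = 4, 10, 0.9, 0.3, 0.05, 2000
        lower, upper = 0.0, 1.0

        memory = rng.uniform(lower, upper, (hms, dim))        # harmony memory
        costs = np.array([objective(h) for h in memory])

        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                       # memory consideration
                    new[j] = memory[rng.integers(hms), j]
                    if rng.random() < par:                    # pitch adjustment
                        new[j] += bw * rng.uniform(-1, 1)
                else:                                         # random selection
                    new[j] = rng.uniform(lower, upper)
            new = np.clip(new, lower, upper)
            c = objective(new)
            worst = np.argmax(costs)
            if c < costs[worst]:                              # replace the worst harmony
                memory[worst], costs[worst] = new, c

        print("best cost:", costs.min())
        print("best harmony:", np.round(memory[np.argmin(costs)], 3))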

  9. Extreme learning machines 2013 algorithms and applications

    CERN Document Server

    Toh, Kar-Ann; Romay, Manuel; Mao, Kezhi

    2014-01-01

    In recent years, ELM has emerged as a revolutionary technique of computational intelligence and has attracted considerable attention. An extreme learning machine (ELM) is a learning system resembling a single-layer feed-forward neural network, whose connections from the input layer to the hidden layer are randomly generated, while the connections from the hidden layer to the output layer are learned through linear learning methods. The outstanding merits of the extreme learning machine (ELM) are its fast learning speed, minimal human intervention and high scalability. This book contains some selected papers from the International Conference on Extreme Learning Machine 2013, which was held in Beijing, China, October 15-17, 2013. This conference aims to bring together researchers and practitioners of extreme learning machines from a variety of fields including artificial intelligence, biomedical engineering and bioinformatics, system modelling and control, and signal and image processing, to promote research and discu...

  10. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human rational neurons process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are among the most important factors influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with that of classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and on the classification of continuous real data sets such as Iris and Ecoli.

  11. Paradigms for Realizing Machine Learning Algorithms.

    Science.gov (United States)

    Agneeswaran, Vijay Srinivas; Tonpay, Pranay; Tiwary, Jayati

    2013-12-01

    The article explains the three generations of machine learning algorithms, all of which aim to operate on big data. The first generation tools are SAS, SPSS, etc., while second generation realizations include Mahout and RapidMiner (which work over Hadoop), and the third generation paradigms include Spark and GraphLab, among others. The essence of the article is that for a number of machine learning algorithms, it is important to look beyond Hadoop's MapReduce paradigm in order to make them work on big data. A number of promising contenders have emerged in the third generation that can be exploited to realize deep analytics on big data.

  12. Advanced Machine learning Algorithm Application for Rotating Machine Health Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kanemoto, Shigeru; Watanabe, Masaya [The University of Aizu, Aizuwakamatsu (Japan); Yusa, Noritaka [Tohoku University, Sendai (Japan)

    2014-08-15

    The present paper evaluates the applicability of conventional sound analysis techniques and modern machine learning algorithms to rotating machine health monitoring. These techniques include support vector machines, deep learning neural networks, etc. Inner-ring defect and misalignment anomaly sound data measured on a rotating machine mockup test facility are used to verify the above algorithms. Although we could not find a remarkable difference in anomaly discrimination performance, some methods yield very interesting eigen-patterns corresponding to normal and abnormal states. These results will be useful for future, more sensitive and robust anomaly monitoring technology.

  13. Comparison of machine learning algorithms for detecting coral reef

    Directory of Open Access Journals (Sweden)

    Eduardo Tusa

    2014-09-01

    Full Text Available (Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector, which is used for an autonomous underwater vehicle, AUV. A fast detection secures the stabilization of the AUV with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction, which uses Gabor wavelet filters, and feature classification, which uses machine learning based on neural networks. Due to the extensive running time of the neural networks, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which resulted in the selection of the Decision Trees algorithm. Our coral detector achieves a running time of 70 ms, compared to the 22 s taken by the algorithm of Purser et al. (2009).

  14. The QV Family Compared to Other Reinforcement Learning Algorithms

    NARCIS (Netherlands)

    Wiering, Marco A.; van Hasselt, Hado

    2009-01-01

    This paper describes several new online model-free reinforcement learning (RL) algorithms. We designed three new reinforcement learning algorithms, namely QV2, QVMAX, and QVMAX2, which are all based on the QV-learning algorithm; in contrast to QV-learning, QVMAX and QVMAX2 are off-policy RL algorithms
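
    A schematic sketch of the basic QV-learning updates, as they are usually described, is given below: V(s) is learned with TD(0) and Q(s, a) is updated towards r + gamma*V(s'). The tiny chain environment, the random behavior policy, and all parameters are hypothetical, and the QV2/QVMAX/QVMAX2 variants (which modify these targets) are not implemented.

        import numpy as np

        n_states, n_actions, gamma, alpha, beta = 5, 2, 0.95, 0.1, 0.1
        Q = np.zeros((n_states, n_actions))
        V = np.zeros(n_states)
        rng = np.random.default_rng(0)

        def step(s, a):
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            return s2, (1.0 if s2 == n_states - 1 else 0.0)   # reward at the right end

        for episode in range(2000):
            s = 0
            for _ in range(20):
                a = int(rng.integers(n_actions))       # random exploration policy
                s2, r = step(s, a)
                target = r + gamma * V[s2]
                Q[s, a] += alpha * (target - Q[s, a])  # Q is updated towards r + gamma*V(s')
                V[s] += beta * (target - V[s])         # V is updated with TD(0)
                s = s2

        print(np.round(Q, 2))   # for each state, "move right" should score higher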

  15. Redes de aprendizaje, aprendizaje en red Learning Networks, Networked Learning

    Directory of Open Access Journals (Sweden)

    Peter Sloep

    2011-10-01

    Full Text Available Learning Networks are online social networks through which participants share information and collaborate to create knowledge. In this way, these networks enrich the learning experience in any learning context, whether formal education (in schools or universities) or non-formal education (professional training). Although the concept of networked learning attracts the interest of different actors in the educational field, many questions remain about how networked learning should be designed to adequately facilitate education and training. The article takes this question as its point of departure and subsequently addresses issues such as the dynamics of the evolution of Learning Networks, the importance of fostering trust among participants, the central role the user profile plays in building that trust, and peer support. In addition, the design process of a Learning Network is elaborated, and an example in the university context is described. Building on the research currently being carried out in our own centre and elsewhere, the chapter concludes with a vision of the future of Learning Networks.

  16. Computer aided lung cancer diagnosis with deep learning algorithms

    Science.gov (United States)

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2016-03-01

    Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deeply structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down-sampling and rotating, we acquired 174412 samples of 52 by 52 pixels each and the corresponding truth files. Three deep learning algorithms were designed and implemented, including the Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and the Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled using DBNs are 4% larger than those mislabeled using the traditional CADx; this might result from the down-sampling process losing some size information about the nodules.

  17. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
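
    The paper evolves weights, architecture, and transfer functions jointly; the numpy sketch below illustrates only the weight-evolution part, using basic PSO to minimize the MSE of a small fixed-architecture network. The toy data, network size, and PSO constants are illustrative assumptions.

```python
# Minimal sketch: basic PSO evolving the weights of a tiny 2-4-1 network (numpy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)         # toy nonlinear target

def unpack(w):                                    # 2 -> 4 -> 1 network, tanh hidden layer
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out.ravel() - y) ** 2)

dim, n_particles = 17, 30
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):                              # standard PSO velocity/position update
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best MSE:", pbest_f.min())
```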

  18. Machine Learning for ATLAS DDM Network Metrics

    CERN Document Server

    Lassnig, Mario; The ATLAS collaboration; Vamosi, Ralf

    2016-01-01

    The increasing volume of physics data is posing a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from our ongoing automation efforts. First, we describe our framework for distributed data management and network metrics, automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  19. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, the RBM is the deep learning algorithm used in this project; its runtime performance is improved through an efficient parallel implementation with the OpenACC tool, with the best possible optimizations applied to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful neural network building block for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of the RBM are available in models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to the RBM.
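
    The record gives no code; as background, the following numpy sketch shows one Contrastive Divergence (CD-1) update for a binary RBM, the training step the project parallelizes with OpenACC. Sizes, learning rate, and data are illustrative.

```python
# Minimal sketch of one Contrastive Divergence (CD-1) update for a binary RBM (numpy only).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 784, 64, 0.1
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One CD-1 step on a batch of binary visible vectors v0 of shape (batch, n_visible)."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                       # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b_v)                     # reconstruction P(v=1 | h0)
    ph1 = sigmoid(pv1 @ W + b_h)
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch      # positive minus negative phase
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

v0 = (rng.random((32, n_visible)) < 0.5).astype(float)   # stand-in for a binary data batch
cd1_step(v0)
```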

  20. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  1. SA-SOM algorithm for detecting communities in complex networks

    Science.gov (United States)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

    Currently, community detection is a hot topic. Based on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), whereby the number of communities can be identified automatically, and proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks produced by the LFR benchmark are utilized to verify the accuracy and the efficiency of this algorithm. The experimental findings demonstrate that this algorithm can identify the communities automatically, accurately and efficiently. Furthermore, this algorithm also acquires higher values of modularity, NMI and density than the SOM algorithm does.

  2. Approximation Methods for Inference and Learning in Belief Networks: Progress and Future Directions

    National Research Council Canada - National Science Library

    Pazzan, Michael

    1997-01-01

    .... In this research project, we have investigated methods and implemented algorithms for efficiently making certain classes of inference in belief networks, and for automatically learning certain...

  3. On local optima in learning bayesian networks

    DEFF Research Database (Denmark)

    Dalgaard, Jens; Kocka, Tomas; Pena, Jose

    2003-01-01

    This paper proposes and evaluates the k-greedy equivalence search algorithm (KES) for learning Bayesian networks (BNs) from complete data. The main characteristic of KES is that it allows a trade-off between greediness and randomness, thus exploring different good local optima. When greediness...... is set at maximum, KES corresponds to the greedy equivalence search algorithm (GES). When greediness is kept at minimum, we prove that under mild assumptions KES asymptotically returns any inclusion optimal BN with nonzero probability. Experimental results for both synthetic and real data are reported...

  4. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Musson, John C. [JLAB; Seaton, Chad [JLAB; Spata, Mike F. [JLAB; Yan, Jianxun [JLAB

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.

  5. Maximum Entropy Learning with Deep Belief Networks

    Directory of Open Access Journals (Sweden)

    Payton Lin

    2016-07-01

    Full Text Available Conventionally, the maximum likelihood (ML) criterion is applied to train a deep belief network (DBN). We present a maximum entropy (ME) learning algorithm for DBNs, designed specifically to handle limited training data. Maximizing only the entropy of parameters in the DBN allows more effective generalization capability, less bias towards data distributions, and robustness to over-fitting compared to ML learning. Results of text classification and object recognition tasks demonstrate that the ME-trained DBN outperforms the ML-trained DBN when training data is limited.

  6. A simple and efficient algorithm for modeling modular complex networks

    Science.gov (United States)

    Kowalczyk, Mateusz; Fronczak, Piotr; Fronczak, Agata

    2017-09-01

    In this paper we introduce a new algorithm to generate networks in which node degrees and community sizes can follow any arbitrary distribution. We compare the quality and efficiency of the proposed algorithm and the well-known algorithm by Lancichinetti et al. In contrast to the latter, the new algorithm, at the cost of some accuracy, allows networks two orders of magnitude larger to be generated in a reasonable time, and it can be easily described analytically.

  7. Recurrent neural networks training with stable bounding ellipsoid algorithm.

    Science.gov (United States)

    Yu, Wen; de Jesús Rubio, José

    2009-06-01

    Bounding ellipsoid (BE) algorithms offer an attractive alternative to traditional training algorithms for neural networks, for example, backpropagation and least squares methods. The benefits include high computational efficiency and fast convergence speed. In this paper, we propose an ellipsoid propagation algorithm to train the weights of recurrent neural networks for nonlinear systems identification. Both hidden layers and output layers can be updated. The stability of the BE algorithm is proven.

  8. Training product unit neural networks with genetic algorithms

    Science.gov (United States)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.

  9. Noise-enhanced clustering and competitive learning algorithms.

    Science.gov (United States)

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
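
    The record states the noise benefit without pseudocode; the numpy sketch below illustrates the general idea under the assumption that the benefit is realized by adding small, annealed zero-mean noise to the k-means centroid updates. The data and noise schedule are illustrative.

```python
# Minimal sketch: k-means with small decaying noise added to centroid updates (numpy only).
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, (100, 2)) for loc in ([0, 0], [4, 0], [0, 4])])
k, n_iters, noise_scale = 3, 30, 0.5

centroids = X[rng.choice(len(X), k, replace=False)]
for t in range(n_iters):
    # assignment step: nearest centroid for every point
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # update step: class means plus annealed zero-mean noise
    sigma = noise_scale / (t + 1)
    for j in range(k):
        members = X[labels == j]
        if len(members):
            centroids[j] = members.mean(axis=0) + rng.normal(0, sigma, 2)

print(centroids.round(2))
```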

  10. A Comparative Analysis of Community Detection Algorithms on Artificial Networks.

    Science.gov (United States)

    Yang, Zhao; Algesheimer, René; Tessone, Claudio J

    2016-08-01

    Many community detection algorithms have been developed to uncover the mesoscopic properties of complex networks. However, how good an algorithm is, in terms of accuracy and computing time, remains an open question. Testing algorithms on real-world networks has certain restrictions which make their insights potentially biased: the networks are usually small, and the underlying communities are not defined objectively. In this study, we employ the Lancichinetti-Fortunato-Radicchi benchmark graph to test eight state-of-the-art algorithms. We quantify the accuracy using complementary measures and the algorithms' computing time. Based on simple network properties and the aforementioned results, we provide guidelines that help to choose the most adequate community detection algorithm for a given network. Moreover, these rules allow uncovering limitations in the use of specific algorithms given macroscopic network properties. Our contribution is threefold: firstly, we provide actual techniques to determine which is the most suited algorithm in most circumstances based on observable properties of the network under consideration. Secondly, we use the mixing parameter as an easily measurable indicator for finding the ranges of reliability of the different algorithms. Finally, we study the dependency on network size, focusing on both the algorithms' predictive power and the effective computing time.

  11. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  12. Comparison between extreme learning machine and wavelet neural networks in data classification

    Science.gov (United States)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    Extreme Learning Machine is a well-known learning algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and an extremely fast learning algorithm with good generalization performance. In this paper, we aim to compare the Extreme Learning Machine with wavelet neural networks, which are also widely used. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results have shown that both extreme learning machine and wavelet neural networks reach good results.

  13. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    .), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based....... This work reviews the most effective nature-inspired algorithms and describes learning strategies based on nature oriented thinking. Examples and the benefits obtained from applying nature-inspired strategies in test generation, learners group optimization, and artificial immune systems for learning...

  14. Image Recovery Algorithm Based on Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Xinghui Zhu

    2014-01-01

    Full Text Available We propose a recovery scheme for image deblurring. The scheme is under the framework of sparse representation and makes three main contributions. Firstly, considering the sparse property of natural images, nonlocal overcomplete dictionaries are learned for image patches in our scheme. Then, the patches in each nonlocal cluster are coded with the corresponding learned dictionary to recover the whole latent image. In addition, for some practical applications, we also propose a method to estimate the blur kernel, so that the algorithm is usable in blind image recovery. The experimental results demonstrate that the proposed scheme is competitive with some current state-of-the-art methods.

  15. An Efficient Hierarchy Algorithm for Community Detection in Complex Networks

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    Full Text Available Community structure is one of the most fundamental and important topological characteristics of complex networks. Research on community structure has wide applications and is very important for analyzing the topological structure, understanding the functions, finding the hidden properties, and forecasting the time variation of networks. This paper analyzes some related algorithms and proposes a new algorithm, the CN agglomerative algorithm, based on graph theory and the local connectedness of the network, to find communities in a network. We show this algorithm is distributed and polynomial; meanwhile the simulations show it is accurate and fine-grained. Furthermore, we modify this algorithm to obtain a modified CN algorithm and apply it to dynamic complex networks, and the simulations also verify that the modified CN algorithm has high accuracy.

  16. The Hidden Flow Structure and Metric Space of Network Embedding Algorithms Based on Random Walks.

    Science.gov (United States)

    Gu, Weiwei; Gong, Li; Lou, Xiaodan; Zhang, Jiang

    2017-10-13

    Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network, such as clustering and link prediction, but also learns a latent vector representation of the nodes which provides theoretical support for a variety of applications, such as visualization, link prediction, node classification, and recommendation. As the latest progress of this research, several algorithms based on random walks have been devised. Although those algorithms have drawn much attention for their high scores in learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of those algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and the hidden metric space of different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications in network embedding.
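
    For orientation, the sketch below shows the random-walk side of such embedding algorithms: walks are sampled from a toy graph, window co-occurrences are accumulated (as DeepWalk-style methods do implicitly), and nodes are embedded by a truncated SVD of the co-occurrence matrix. This is a simplified stand-in, not the open-flow formulation of the paper.

```python
# Minimal sketch of a random-walk embedding: walks -> window co-occurrences -> truncated SVD (numpy only).
import numpy as np

rng = np.random.default_rng(0)
# toy undirected graph as an adjacency list (two loosely connected triangles)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
n = len(adj)

def random_walk(start, length=10):
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

# accumulate co-occurrence counts within a sliding window over many walks
window, cooc = 2, np.zeros((n, n))
for start in adj:
    for _ in range(20):
        walk = random_walk(start)
        for i, u in enumerate(walk):
            for v in walk[max(0, i - window): i + window + 1]:
                if u != v:
                    cooc[u, v] += 1

# embed nodes with a rank-2 SVD of the (log-scaled) co-occurrence matrix
U, S, _ = np.linalg.svd(np.log1p(cooc))
embedding = U[:, :2] * S[:2]
print(embedding.round(2))   # nodes in the same triangle should land close together
```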

  17. Dictionary Learning Algorithms for Sparse Representation

    OpenAIRE

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2003-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or wo...

  18. Learning Local Components to Understand Large Bayesian Networks

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Xiang, Yanping; Cordero, Jorge

    2009-01-01

    (domain experts) to extract accurate information from a large Bayesian network due to dimensional difficulty. We define a formulation of local components and propose a clustering algorithm to learn such local components given complete data. The algorithm groups together most inter-relevant attributes...... in a domain. We evaluate its performance on three benchmark Bayesian networks and provide results in support. We further show that the learned components may represent local knowledge more precisely in comparison to the full Bayesian networks when working with a small amount of data....

  19. Metaheuristic Algorithms for Convolution Neural Network.

    Science.gov (United States)

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research areas of science, engineering, and industry. However, the implementation strategy of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose an implementation strategy for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (up to 7.14 percent).
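
    Of the three metaheuristics named, simulated annealing is the simplest to sketch; the numpy example below anneals a flat parameter vector of a tiny logistic model rather than a CNN, purely to illustrate the perturb/accept/cool loop. All constants are illustrative.

```python
# Minimal sketch: simulated annealing over a flat weight vector of a tiny classifier (numpy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)    # toy labels

def loss(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w[:4] + w[4])))               # logistic "network" with 5 parameters
    return np.mean((p - y) ** 2)

w = rng.normal(0, 0.1, 5)
best_w, best_f = w.copy(), loss(w)
T = 1.0
for step in range(2000):
    candidate = w + rng.normal(0, 0.1, 5)                       # random perturbation
    delta = loss(candidate) - loss(w)
    if delta < 0 or rng.random() < np.exp(-delta / T):          # Metropolis acceptance rule
        w = candidate
    if loss(w) < best_f:
        best_w, best_f = w.copy(), loss(w)
    T *= 0.995                                                  # geometric cooling schedule

print("best MSE:", round(best_f, 4))
```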

  20. Metaheuristic Algorithms for Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    L. M. Rasdi Rere

    2016-01-01

    Full Text Available A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research areas of science, engineering, and industry. However, the implementation strategy of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose an implementation strategy for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (up to 7.14 percent).

  1. TPDA2 ALGORITHM FOR LEARNING BN STRUCTURE FROM MISSING VALUE AND OUTLIERS IN DATA MINING

    Directory of Open Access Journals (Sweden)

    Benhard Sitohang

    2006-01-01

    Full Text Available The Three-Phase Dependency Analysis (TPDA) algorithm was proved to be the most efficient algorithm, requiring at most O(N^4) Conditional Independence (CI) tests. By integrating TPDA with a node topological sort algorithm, it can be used to learn Bayesian Network (BN) structure from data with missing values (named the TPDA1 algorithm). Outliers can then be reduced by applying an outlier detection & removal algorithm as pre-processing for TPDA1. The proposed TPDA2 algorithm consists of these ideas: outlier detection & removal, TPDA, and node topological sort.

  2. "FORCE" learning in recurrent neural networks as data assimilation

    Science.gov (United States)

    Duane, Gregory S.

    2017-12-01

    It is shown that the "FORCE" algorithm for learning in arbitrarily connected networks of simple neuronal units can be cast as a Kalman Filter, with a particular state-dependent form for the background error covariances. The resulting interpretation has implications for initialization of the learning algorithm, leads to an extension to include interactions between the weight updates for different neurons, and can represent relationships within groups of multiple target output signals.
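
    For context, the core of FORCE learning is a recursive-least-squares (RLS) update of the readout weights; the numpy sketch below shows that update in isolation (with random stand-in rates and a fixed teacher readout), not the Kalman-filter reformulation proposed in the paper.

```python
# Minimal sketch of the recursive-least-squares (RLS) update used by FORCE learning (numpy only).
import numpy as np

rng = np.random.default_rng(0)
N, T, alpha = 50, 500, 1.0
w = np.zeros(N)                       # readout weights being learned
P = np.eye(N) / alpha                 # running estimate of the inverse rate-correlation matrix
w_teacher = rng.normal(0, 1, N)       # hidden teacher readout standing in for the target signal

for t in range(T):
    r = np.tanh(rng.normal(0, 1, N))  # stand-in rates; in FORCE these come from the recurrent network
    e = w @ r - w_teacher @ r         # error between readout and target
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)           # RLS gain
    P -= np.outer(k, Pr)              # rank-one downdate of P
    w -= e * k                        # FORCE/RLS weight update that rapidly suppresses the error

print("readout error after training:", round(float(np.linalg.norm(w - w_teacher)), 4))
```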

  3. Cooperative Learning for Distributed In-Network Traffic Classification

    Science.gov (United States)

    Joseph, S. B.; Loo, H. R.; Ismail, I.; Andromeda, T.; Marsono, M. N.

    2017-04-01

    Inspired by the concept of autonomic distributed/decentralized network management schemes, we consider the issue of information exchange among distributed network nodes to improve network performance and promote scalability of in-network monitoring. In this paper, we propose a cooperative learning algorithm for propagation and synchronization of network information among autonomic distributed network nodes for online traffic classification. The results show that network nodes with sharing capability perform better, with a higher average accuracy of 89.21% (sharing data) and 88.37% (sharing clusters), compared to 88.06% for nodes without cooperative learning capability. The overall performance indicates that cooperative learning is promising for distributed in-network traffic classification.

  4. Study of Vivaldi Algorithm in Energy Constraint Networks

    Directory of Open Access Journals (Sweden)

    Tomas Handl

    2011-01-01

    Full Text Available The presented paper discusses the viability of the Vivaldi localization algorithm, and of synthetic coordinate systems in general, for localization purposes in energy-constrained networks. Synthetic coordinate systems achieve good results in IP-based networks and thus could be a promising way of localizing nodes in other types of networks. However, transferring the Vivaldi algorithm to a different kind of network is a difficult task because of the different basic characteristics of the network and the network nodes. In this paper we focus on the different aspects of IP-based networks and networks of wireless sensors, which suffer from strict energy limitations. During our work we proposed a modified version of the two-dimensional Vivaldi localization algorithm with a height system and developed a simulator tool for an initial investigation of its function in ad-hoc energy-constrained networks.

  5. A Location-Aware Vertical Handoff Algorithm for Hybrid Networks

    KAUST Repository

    Mehbodniya, Abolfazl

    2010-07-01

    One of the main objectives of wireless networking is to provide mobile users with a robust connection to different networks so that they can move freely between heterogeneous networks while running their computing applications with no interruption. Horizontal handoff, or generally speaking handoff, is a process which maintains a mobile user's active connection as it moves within a wireless network, whereas vertical handoff (VHO) refers to handover between different types of networks or different network layers. Optimizing the VHO process is an important issue, required to reduce network signalling and mobile device power consumption as well as to improve network quality of service (QoS) and grade of service (GoS). In this paper, a VHO algorithm for multitier (overlay) networks is proposed. This algorithm uses pattern recognition to estimate the user's position, and decides on the handoff based on this information. For the pattern recognition algorithm structure, the probabilistic neural network (PNN), which has considerable simplicity and efficiency over existing pattern classifiers, is used. Further optimization is proposed to improve the performance of the PNN algorithm. Performance analysis and comparisons with the existing VHO algorithm are provided and demonstrate a significant improvement with the proposed algorithm. Furthermore, incorporating the proposed algorithm, a structure is proposed for VHO from the medium access control (MAC) layer point of view. © 2010 ACADEMY PUBLISHER.

  6. Structure Learning in Power Distribution Networks

    Energy Technology Data Exchange (ETDEWEB)

    Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-13

    Traditionally power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. Here, inspired by the proliferation of metering technology, we discuss statistical estimation problems in structurally loopy but operationally radial distribution grids, which consist in learning the operational layout of the network from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient – polynomial in time – which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  7. WEB BASED LEARNING OF COMPUTER NETWORK COURSE

    Directory of Open Access Journals (Sweden)

    Hakan KAPTAN

    2004-04-01

    Full Text Available As a result of developments in the Internet and computer fields, web-based education has become one of the areas in which many improvement and research studies are carried out. In this study, web-based education materials are described for a multimedia animation and simulation aided Computer Networks course in Technical Education Faculties. The course content is formed from university course books, web-based education materials and the technology web pages of companies. The content consists of texts, pictures and figures to increase the motivation of students, and the learning of some topics is facilitated by supporting them with animations. Furthermore, to illustrate the working principles of routing algorithms and congestion control algorithms, simulators are constructed to enable interactive learning.

  8. Collaborative learning in networks.

    Science.gov (United States)

    Mason, Winter; Watts, Duncan J

    2012-01-17

    Complex problems in science, business, and engineering typically require some tradeoff between exploitation of known solutions and exploration for novel ones, where, in many cases, information about known solutions can also disseminate among individual problem solvers through formal or informal networks. Prior research on complex problem solving by collectives has found the counterintuitive result that inefficient networks, meaning networks that disseminate information relatively slowly, can perform better than efficient networks for problems that require extended exploration. In this paper, we report on a series of 256 Web-based experiments in which groups of 16 individuals collectively solved a complex problem and shared information through different communication networks. As expected, we found that collective exploration improved average success over independent exploration because good solutions could diffuse through the network. In contrast to prior work, however, we found that efficient networks outperformed inefficient networks, even in a problem space with qualitative properties thought to favor inefficient networks. We explain this result in terms of individual-level explore-exploit decisions, which we find were influenced by the network structure as well as by strategic considerations and the relative payoff between maxima. We conclude by discussing implications for real-world problem solving and possible extensions.

  9. Optimization in Quaternion Dynamic Systems: Gradient, Hessian, and Learning Algorithms.

    Science.gov (United States)

    Xu, Dongpo; Xia, Yili; Mandic, Danilo P

    2016-02-01

    The optimization of real scalar functions of quaternion variables, such as the mean square error or array output power, underpins many practical applications. Solutions typically require the calculation of the gradient and Hessian. However, real functions of quaternion variables are essentially nonanalytic, which is prohibitive to the development of quaternion-valued learning systems. To address this issue, we propose new definitions of the quaternion gradient and Hessian, based on the novel generalized Hamilton-real (GHR) calculus, thus making possible an efficient derivation of general optimization algorithms directly in the quaternion field, rather than using the isomorphism with the real domain, as is current practice. In addition, unlike the existing quaternion gradients, the GHR calculus allows for the product and chain rule, and for a one-to-one correspondence of the novel quaternion gradient and Hessian with their real counterparts. Properties of the quaternion gradient and Hessian relevant to numerical applications are also introduced, opening a new avenue of research in quaternion optimization and greatly simplifying the derivations of learning algorithms. The proposed GHR calculus is shown to yield the same generic algorithm forms as the corresponding real- and complex-valued algorithms. Advantages of the proposed framework are illuminated over illustrative simulations in quaternion signal processing and neural networks.

  10. Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network.

    Science.gov (United States)

    Abdullah, Radhwan Mohamed; Zukarnain, Zuriati Ahmad

    2017-07-14

    Transferring a huge amount of data between different network locations over the network links depends on the network's traffic capacity and data rate. Traditionally, a mobile device is handed over by considering only one criterion, the Received Signal Strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load and an inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in the heterogeneous wireless network. The algorithm consists of three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of the handover number probability and the handover failure probability. In addition, it is noticed that the network priority handover decision algorithm produces better results compared to the equal priority and the mobile priority handover decision algorithms. Finally, the simulation results are validated by the analytical model.

  11. An Energy Efficient Multipath Routing Algorithm for Wireless Sensor Networks

    NARCIS (Netherlands)

    Dulman, S.O.; Wu Jian, W.J.; Havinga, Paul J.M.

    In this paper we introduce a new routing algorithm for wireless sensor networks. The aim of this algorithm is to provide on-demand multiple disjoint paths between a data source and a destination. Our Multipath On-Demand Routing Algorithm (MDR) improves the reliability of data routing in a wireless

  12. Spectrum Assignment Algorithm for Cognitive Machine-to-Machine Networks

    Directory of Open Access Journals (Sweden)

    Soheil Rostami

    2016-01-01

    Full Text Available A novel aggregation-based spectrum assignment algorithm for Cognitive Machine-To-Machine (CM2M) networks is proposed. The introduced algorithm takes practical constraints including interference to the Licensed Users (LUs), co-channel interference (CCI) among CM2M devices, and Maximum Aggregation Span (MAS) into consideration. Simulation results show clearly that the proposed algorithm outperforms State-Of-The-Art (SOTA) algorithms in terms of spectrum utilisation and network capacity. Furthermore, the convergence analysis of the proposed algorithm verifies its high convergence rate.

  13. Adaptive clustering algorithm for community detection in complex networks

    Science.gov (United States)

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods or algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduced a different adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior where vertices always travel toward their preferable neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process where vertices constantly regroup. In addition, we show that our algorithm appears advantageous over other competing methods (e.g., the Newman-fast algorithm) through intensive evaluation. The applications in three real-world networks demonstrate the superiority of our algorithm to find communities that are parallel with the appropriate organization in reality.

  14. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason why there is so much interest in these technologies is that classical solutions either do not exist or are not sufficiently robust. They are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is relative simplicity and the possibility of self-learning based on a set of pattern situations. The most commonly used algorithm for the learning phase is back-propagation of error (BPE). The basis of BPE is minimization of an error function representing the sum of squared errors at the outputs of the neural net, over all patterns of the learning set. However, when performing BPE for the first time, one finds that the handling of the learning factor must be complemented by a suitable method; the stability of the learning process and the rate of convergence depend on the selected method. In the article, two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the value of the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning-factor handling method, and to present the results.
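
    A compact matrix-form implementation of BPE for an N-layer network can be sketched as follows (numpy); the two-regime learning factor used here is only a crude stand-in for the two handling functions derived in the article, and the data and layer widths are illustrative.

```python
# Minimal sketch: matrix-form backpropagation for an N-layer sigmoid network with a simple
# error-dependent learning factor (numpy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 2))
Y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)    # toy XOR-like target

sizes = [2, 8, 8, 1]                                          # layer widths of the N-layer network
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(s) for s in sizes[1:]]
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(3000):
    # forward pass, keeping every layer's activation for the backward pass
    acts = [X]
    for Wl, bl in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wl + bl))
    err = acts[-1] - Y
    E = 0.5 * np.sum(err ** 2)                                # sum-of-squared-errors criterion
    eta = 1.0 if E > 5.0 else 0.3                             # crude two-regime learning factor
    # backward pass in matrix form
    delta = err * acts[-1] * (1 - acts[-1])
    for l in range(len(W) - 1, -1, -1):
        grad_W = acts[l].T @ delta / len(X)
        grad_b = delta.mean(axis=0)
        if l > 0:                                             # propagate delta before updating W[l]
            delta = (delta @ W[l].T) * acts[l] * (1 - acts[l])
        W[l] -= eta * grad_W
        b[l] -= eta * grad_b

print("final sum-of-squares error:", round(float(E), 3))
```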

  15. A generic algorithm for layout of biological networks.

    Science.gov (United States)

    Schreiber, Falk; Dwyer, Tim; Marriott, Kim; Wybrow, Michael

    2009-11-12

    Biological networks are widely used to represent processes in biological systems and to capture interactions and dependencies between biological entities. Their size and complexity is steadily increasing due to the ongoing growth of knowledge in the life sciences. To aid understanding of biological networks several algorithms for laying out and graphically representing networks and network analysis results have been developed. However, current algorithms are specialized to particular layout styles and therefore different algorithms are required for each kind of network and/or style of layout. This increases implementation effort and means that new algorithms must be developed for new layout styles. Furthermore, additional effort is necessary to compose different layout conventions in the same diagram. Also the user cannot usually customize the placement of nodes to tailor the layout to their particular need or task and there is little support for interactive network exploration. We present a novel algorithm to visualize different biological networks and network analysis results in meaningful ways depending on network types and analysis outcome. Our method is based on constrained graph layout and we demonstrate how it can handle the drawing conventions used in biological networks. The presented algorithm offers the ability to produce many of the fundamental popular drawing styles while allowing the flexibility of constraints to further tailor these layouts.

  16. A generic algorithm for layout of biological networks

    Directory of Open Access Journals (Sweden)

    Dwyer Tim

    2009-11-01

    Full Text Available Abstract Background Biological networks are widely used to represent processes in biological systems and to capture interactions and dependencies between biological entities. Their size and complexity is steadily increasing due to the ongoing growth of knowledge in the life sciences. To aid understanding of biological networks several algorithms for laying out and graphically representing networks and network analysis results have been developed. However, current algorithms are specialized to particular layout styles and therefore different algorithms are required for each kind of network and/or style of layout. This increases implementation effort and means that new algorithms must be developed for new layout styles. Furthermore, additional effort is necessary to compose different layout conventions in the same diagram. Also the user cannot usually customize the placement of nodes to tailor the layout to their particular need or task and there is little support for interactive network exploration. Results We present a novel algorithm to visualize different biological networks and network analysis results in meaningful ways depending on network types and analysis outcome. Our method is based on constrained graph layout and we demonstrate how it can handle the drawing conventions used in biological networks. Conclusion The presented algorithm offers the ability to produce many of the fundamental popular drawing styles while allowing the flexibility of constraints to further tailor these layouts.

  17. An artificial immune system algorithm approach for reconfiguring distribution network

    Science.gov (United States)

    Syahputra, Ramadoni; Soesanti, Indah

    2017-08-01

    This paper proposes an artificial immune system (AIS) algorithm approach for reconfiguring a distribution network in the presence of distributed generators (DG). A high-performance distribution network is one that has low power loss, a better voltage profile, and balanced loading among feeders. The task of improving the performance of the distribution network is therefore the optimization of the network configuration, and this optimization has become a necessary study with the presence of DG across entire networks. In this work, optimization of the network configuration is based on an AIS algorithm. The methodology has been tested on a model of the 33-bus IEEE radial distribution network with and without DG integration. The results show that the optimal configuration of the distribution network is able to reduce power loss and to improve the voltage profile of the distribution network significantly.

  18. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    Directory of Open Access Journals (Sweden)

    Lianbo Ma

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated from the subpopulations of the lower level. At the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computational robustness.

  19. Classification and learning using genetic algorithms applications in Bioinformatics and Web Intelligence

    CERN Document Server

    Bandyopadhyay, Sanghamitra

    2007-01-01

    This book provides a unified framework that describes how genetic learning can be used to design pattern recognition and learning systems. It examines how a search technique, the genetic algorithm, can be used for pattern classification mainly through approximating decision boundaries. Coverage also demonstrates the effectiveness of the genetic classifiers vis-à-vis several widely used classifiers, including neural networks.

  20. FAST ZEROX ALGORITHM FOR ROUTING IN OPTICAL MULTISTAGE INTERCONNECTION NETWORKS

    Directory of Open Access Journals (Sweden)

    T. D. Shahida

    2010-05-01

    Full Text Available Based on the ZeroX algorithm, a fast and efficient crosstalk-free time-domain algorithm called Fast ZeroX, or shortly the FastZ_X algorithm, is proposed for solving the optical crosstalk problem in optical Omega multistage interconnection networks. A new pre-routing technique called the inverse Conflict Matrix (iCM) is also introduced to map all possible conflicts identified between each node in the network, as another representation of the standard conflict matrix commonly used in previous Zero-based algorithms. It is shown that, using the new iCM, the original ZeroX algorithm is simplified, thus improving the algorithm by reducing the time needed to complete the routing process. Through simulation modeling, the new approach yields the best performance in terms of minimal routing time in comparison to the original ZeroX algorithm as well as the previous algorithms tested for comparison in this paper.

  1. Power control algorithms for mobile ad hoc networks

    Directory of Open Access Journals (Sweden)

    Nuraj L. Pradhan

    2011-07-01

    We will also focus on an adaptive distributed power management (DISPOW) algorithm as an example of the multi-parameter optimization approach which manages the transmit power of nodes in a wireless ad hoc network to preserve network connectivity and cooperatively reduce interference. We will show that the algorithm, in a distributed manner, builds a unique stable network topology tailored to its surrounding node density and propagation environment over random topologies in a dynamic mobile wireless channel.

  2. A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks

    Science.gov (United States)

    Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon

    In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro, and WLAN networks are the candidate networks for this selection algorithm. The algorithm is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class and power consumption of each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared to the handover delay, this particular network is removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. The final network selection algorithm consists of AHP (Analytic Hierarchy Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost and lifetime. If the user preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. We also conduct simulations using the OPNET simulation tool. The simulation results show that the proposed algorithm provides a longer lifetime in the hybrid wireless network environment.
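
    The record describes the AHP/GRA selection step only at a high level; the numpy sketch below shows one common way AHP-derived criteria weights and Grey Relational Analysis can be combined to rank candidate networks. The pairwise-comparison matrix and the candidate scores are illustrative, and all criteria are assumed to be normalized so that larger values are better.

```python
# Minimal sketch: AHP weights (principal eigenvector) + Grey Relational Analysis ranking (numpy only).
import numpy as np

# AHP pairwise-comparison matrix over the criteria (QoS, cost, lifetime) -- illustrative values
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, eigvals.real.argmax()])
w = w / w.sum()                                   # criteria weights from the principal eigenvector

# candidate networks (rows: CDMA, WiBro, WLAN) scored on the same criteria -- illustrative values
M = np.array([[0.6, 0.4, 0.7],
              [0.8, 0.6, 0.5],
              [0.9, 0.9, 0.3]])

# Grey Relational Analysis against the per-criterion ideal
ideal = M.max(axis=0)
delta = np.abs(M - ideal)
xi = (delta.min() + 0.5 * delta.max()) / (delta + 0.5 * delta.max())   # grey relational coefficients
scores = xi @ w
best = ["CDMA", "WiBro", "WLAN"][int(scores.argmax())]
print(dict(zip(["CDMA", "WiBro", "WLAN"], scores.round(3))), "->", best)
```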

  3. Feed Forward Neural Network Algorithm for Frequent Patterns Mining

    OpenAIRE

    Dr. K.R.Pardasani; Sanjay Sharma; Amit Bhagat

    2010-01-01

    Association rule mining is used to find relationships among items in large data sets. Frequent pattern mining is an important aspect of association rule mining. In this paper, an efficient algorithm named Apriori-Feed Forward (AFF), based on the Apriori algorithm and the Feed Forward Neural Network, is presented to mine frequent patterns. The Apriori algorithm scans the database many times to generate frequent itemsets, whereas the Apriori-Feed Forward (AFF) algorithm scans the database only once. Computational resu...
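
    For reference, the classic Apriori loop that the AFF algorithm improves on (candidate generation plus repeated support-counting passes over the database) can be sketched in a few lines of plain Python; the transactions and threshold below are illustrative.

```python
# Minimal sketch of classic Apriori candidate generation and support counting (pure Python).
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
                {"bread", "milk", "butter"}, {"bread", "milk"}]
min_support = 3   # absolute support threshold

def frequent_itemsets(transactions, min_support):
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items]
    frequent = {}
    while level:
        # count support of each candidate with one pass over the transactions
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # join step: build (k+1)-item candidates from pairs of surviving k-itemsets
        keys = sorted(survivors, key=sorted)
        level = list({a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1})
    return frequent

for itemset, support in frequent_itemsets(transactions, min_support).items():
    print(sorted(itemset), support)
```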

  4. Machine learning for identifying botnet network traffic

    DEFF Research Database (Denmark)

    Stevanovic, Matija; Pedersen, Jens Myrup

    2013-01-01

    . Due to promise of non-invasive and resilient detection, botnet detection based on network traffic analysis has drawn a special attention of the research community. Furthermore, many authors have turned their attention to the use of machine learning algorithms as the mean of inferring botnet......-related knowledge from the monitored traffic. This paper presents a review of contemporary botnet detection methods that use machine learning as a tool of identifying botnet-related traffic. The main goal of the paper is to provide a comprehensive overview on the field by summarizing current scientific efforts....... The contribution of the paper is three-fold. First, the paper provides a detailed insight on the existing detection methods by investigating which bot-related heuristic were assumed by the detection systems and how different machine learning techniques were adapted in order to capture botnet-related knowledge...

  5. Inference of time-delayed gene regulatory networks based on dynamic Bayesian network hybrid learning method.

    Science.gov (United States)

    Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui

    2017-10-06

    Gene regulatory network (GRN) research reveals complex life phenomena from the perspective of gene interaction, and is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. In order to make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on the dynamic Bayesian network (DBN) to construct multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. The DBNCS algorithm first uses the CMI2NI (conditional mutual inclusive information-based network inference) algorithm for learning network structure profiles, namely the construction of the search space. Then the redundant regulations are removed by using the recursive optimization algorithm (RO), thereby reducing the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, the DBN model is used to identify the direction of gene regulation within the cliques and search for the optimal network structure. The performance of the DBNCS algorithm is evaluated on the benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the outstanding performance of the GRNs.

  6. Learning Python network programming

    CERN Document Server

    Sarker, M O Faruque

    2015-01-01

    If you're a Python developer or a system administrator with Python experience and you're looking to take your first steps in network programming, then this book is for you. Basic knowledge of Python is assumed.

  7. Machine learning based global particle identification algorithms at LHCb experiment

    CERN Multimedia

    Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor

    2017-01-01

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.

  8. Fall detection using supervised machine learning algorithms: A comparative study

    KAUST Repository

    Zerrouki, Nabil

    2017-01-05

    Fall incidents are considered the leading cause of disability and even mortality among older adults. To address this problem, the fields of fall detection and prevention have received a lot of attention over the past years and have attracted many research efforts. In the current study we present an overall performance comparison between fall detection systems using the most popular machine learning approaches, which are: Naïve Bayes, K nearest neighbor, neural network, and support vector machine. The analysis of the classification power associated with these most widely utilized algorithms is conducted on two fall detection databases, namely FDD and URFD. Since the performance of a classification algorithm is inherently dependent on the features, we extracted and used the same features for all classifiers. The classification evaluation is conducted using different state-of-the-art statistical measures such as the overall accuracy, the F-measure coefficient, and the area under the ROC curve (AUC) value.
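
    A comparison of the four named classifiers on a shared feature set can be sketched with scikit-learn as below; the synthetic features merely stand in for those extracted from the FDD and URFD databases, and the hyperparameters are illustrative.

```python
# Minimal sketch: comparing the four classifiers named in the record on stand-in feature data
# (assumes scikit-learn; the synthetic data below merely replaces the real fall-detection features).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (400, 10))                      # stand-in for extracted features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 400) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]
    print(f"{name:>15}: acc={accuracy_score(y_te, pred):.3f} "
          f"F1={f1_score(y_te, pred):.3f} AUC={roc_auc_score(y_te, proba):.3f}")
```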

  9. Language Choice & Global Learning Networks

    Directory of Open Access Journals (Sweden)

    Dennis Sayers

    1995-05-01

    Full Text Available How can other languages be used in conjunction with English to further intercultural and multilingual learning when teachers and students participate in computer-based global learning networks? Two portraits are presented of multilingual activities in the Orillas and I*EARN learning networks, and are discussed as examples of the principal modalities of communication employed in networking projects between distant classes. Next, an important historical precedent --the social controversy which accompanied the introduction of telephone technology at the end of the last century-- is examined in terms of its implications for language choice in contemporary classroom telecomputing projects. Finally, recommendations are offered to guide decision making concerning the role of language choice in promoting collaborative critical inquiry.

  10. Reconstructing Causal Biological Networks through Active Learning.

    Directory of Open Access Journals (Sweden)

    Hyunghoon Cho

    Full Text Available Reverse-engineering of biological networks is a central problem in systems biology. Intervention data, such as gene knockouts or knockdowns, are typically used for teasing apart causal relationships among genes. Under time or resource constraints, one needs to carefully choose which intervention experiments to carry out. Previous approaches for selecting the most informative interventions have largely focused on discrete Bayesian networks. However, continuous Bayesian networks are of great practical interest, especially in the study of complex biological systems and their quantitative properties. In this work, we present an efficient, information-theoretic active learning algorithm for Gaussian Bayesian networks (GBNs), which serve as important models for gene regulatory networks. In addition to providing linear-algebraic insights unique to GBNs, leading to significant runtime improvements, we demonstrate the effectiveness of our method on data simulated with GBNs and the DREAM4 network inference challenge data sets. Our method generally leads to faster recovery of the underlying network structure and faster convergence to the final distribution of confidence scores over candidate graph structures using the full data, in comparison to random selection of intervention experiments.

  11. Fast linear algorithms for machine learning

    Science.gov (United States)

    Lu, Yichao

    Nowadays linear methods like Regression, Principal Component Analysis and Canonical Correlation Analysis are well understood and widely used by the machine learning community for predictive modeling and feature generation. Generally speaking, all these methods aim at capturing interesting subspaces in the original high dimensional feature space. Due to their simple linear structure, these methods all have a closed form solution which makes computation and theoretical analysis very easy for small datasets. However, in modern machine learning problems it is very common for a dataset to have millions or billions of features and samples. In these cases, pursuing the closed form solution for these linear methods can be extremely slow since it requires multiplying two huge matrices and computing the inverse, inverse square root, QR decomposition or Singular Value Decomposition (SVD) of huge matrices. In this thesis, we consider three fast algorithms for computing approximate solutions to Regression and Canonical Correlation Analysis for huge datasets.

  12. Collaborative Supervised Learning for Sensor Networks

    Science.gov (United States)

    Wagstaff, Kiri L.; Rebbapragada, Umaa; Lane, Terran

    2011-01-01

    Collaboration methods for distributed machine-learning algorithms involve the specification of communication protocols for the learners, which can query other learners and/or broadcast their findings preemptively. Each learner incorporates information from its neighbors into its own training set, and they are thereby able to bootstrap each other to higher performance. Each learner resides at a different node in the sensor network and makes observations (collects data) independently of the other learners. After being seeded with an initial labeled training set, each learner proceeds to learn in an iterative fashion. New data is collected and classified. The learner can then either broadcast its most confident classifications for use by other learners, or can query neighbors for their classifications of its least confident items. As such, collaborative learning combines elements of both passive (broadcast) and active (query) learning. It also uses ideas from ensemble learning to combine the multiple responses to a given query into a single useful label. This approach has been evaluated against current non-collaborative alternatives, including training a single classifier and deploying it at all nodes with no further learning possible, and permitting learners to learn from their own most confident judgments, absent interaction with their neighbors. On several data sets, it has been consistently found that active collaboration is the best strategy for a distributed learner network. The main advantages include the ability for learning to take place autonomously by collaboration rather than by requiring intervention from an oracle (usually human), and also the ability to learn in a distributed environment, permitting decisions to be made in situ and to yield faster response time.
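
    As a rough illustration of the broadcast half of this collaboration scheme (the query step for the least confident items would be analogous), the sketch below lets a few scikit-learn learners trade their most confident labels; the Learner class, data splits, and number of rounds are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.datasets import make_classification

      X, y = make_classification(n_samples=600, n_features=10, random_state=0)

      class Learner:
          """One sensor-network node: a local classifier plus its own unlabelled pool."""
          def __init__(self, X_seed, y_seed, X_pool):
              self.X, self.y, self.pool = X_seed, y_seed, X_pool
              self.model = LogisticRegression(max_iter=1000)

          def fit(self):
              self.model.fit(self.X, self.y)

          def confident_broadcast(self, k=5):
              # Pick the k pool items this learner labels most confidently.
              conf = self.model.predict_proba(self.pool).max(axis=1)
              idx = np.argsort(conf)[-k:]
              return self.pool[idx], self.model.predict(self.pool[idx])

          def absorb(self, X_new, y_new):
              # Neighbours' broadcasts are added to the local training set.
              self.X = np.vstack([self.X, X_new])
              self.y = np.concatenate([self.y, y_new])

      learners = [Learner(X[i*20:(i+1)*20], y[i*20:(i+1)*20],
                          X[60+i*180:60+(i+1)*180]) for i in range(3)]
      for round_ in range(3):                      # a few collaboration rounds
          for l in learners:
              l.fit()
          for l in learners:
              Xb, yb = l.confident_broadcast()
              for other in learners:
                  if other is not l:
                      other.absorb(Xb, yb)
      print("training-set sizes after collaboration:", [len(l.y) for l in learners])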

  13. Congested Link Inference Algorithms in Dynamic Routing IP Network

    Directory of Open Access Journals (Sweden)

    Yu Chen

    2017-01-01

    Full Text Available The performance of current congested link inference algorithms, such as the classical CLINK algorithm, degrades noticeably in dynamic routing IP networks. To overcome this problem, based on the assumptions of the Markov property and time homogeneity, we build a Variable Structure Discrete Dynamic Bayesian (VSDDB) network as a simplified model of a dynamic routing IP network. Under the simplified VSDDB model, based on the Bayesian Maximum A Posteriori (BMAP) and Rest Bayesian Network Model (RBNM), we propose an Improved CLINK (ICLINK) algorithm. Considering that multiple links are often congested concurrently, we also propose the CLILRS (Congested Link Inference algorithm based on Lagrangian Relaxation Subgradient) algorithm to infer the set of congested links. We validate our results with analytical, simulation, and real Internet experiments.

  14. Solving Hub Network Problem Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mursyid Hasan Basri

    2012-01-01

    Full Text Available This paper addresses a network problem described as follows. There are n ports that interact, and p of them will be designated as hubs. All hubs are fully interconnected. Each spoke will be allocated to only one of the available hubs. Direct connection between two spokes is allowed only if they are allocated to the same hub. The latter is a distinct characteristic that differentiates this system from a pure hub-and-spoke system, in which direct connection between two spokes is not allowed. The problem is where to locate the hub ports and to which hub each spoke should be allocated so that the total transportation cost is minimum. In the first model, some additional aspects are taken into consideration in order to achieve a better representation of the problem. First, weekly service should be accomplished. Second, various vessel types should be considered. Last, a concept of an inter-hub discount factor is introduced. Regarding the last aspect, it represents a cost reduction factor at hub ports due to economies of scale. In practice, it is common that the cost rate for inter-hub movement is less than the cost rate for movement between a hub and an origin/destination. In this first model, the inter-hub discount factor is assumed to be independent of the amount of flow on inter-hub links (denoted as the flow-independent discount policy. The results indicated that the patterns of enlargement of container ship size are, to some degree, similar to those in the Kurokawa study. However, with regard to hub locations, the results have not represented real practice. In the proposed model, the unsatisfactory result on hub locations is addressed. One aspect that could possibly be improved to find better hub locations is the inter-hub discount factor, which is then assumed to depend on the amount of inter-hub flow (denoted as the flow-dependent discount policy. There are two discount functions examined in this paper. Both functions are characterized by

  15. Learning oncogenetic networks by reducing to mixed integer linear programming.

    Science.gov (United States)

    Shahrabi Farahani, Hossein; Lagergren, Jens

    2013-01-01

    Cancer can be a result of accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways are of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks, that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.

  16. Analysis of Community Detection Algorithms for Large Scale Cyber Networks

    Energy Technology Data Exchange (ETDEWEB)

    Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee; Mackey, Patrick S.; Springer, John

    2016-09-30

    The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network. This study compares the performance of different algorithms on the network in terms of running time. The paper begins with an introduction to the concept of clustering and community detection, followed by the research question that the team aimed to address. Further, the paper describes the graph metrics that were considered in order to shortlist algorithms, followed by a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section of the paper describes the methodology used by the team in order to run the algorithms and determine which algorithm is most efficient with respect to running time. Finally, the last section of the paper includes the results obtained by the team and a conclusion based on those results as well as future work.

  17. The Vital Network: An Algorithmic Milieu of Communication and Control

    Directory of Open Access Journals (Sweden)

    Sandra Robinson

    2016-09-01

    Full Text Available The biological turn in computing has influenced the development of algorithmic control and what I call the vital network: a dynamic, relational, and generative assemblage that is self-organizing in response to the heterogeneity of contemporary network processes, connections, and communication. I discuss this biological turn in computation and control for communication alongside historically significant developments in cybernetics that set out the foundation for the development of self-regulating computer systems. Control is shifting away from models that historically relied on the human-animal model of cognition to govern communication and control, as in early cybernetics and computer science, to a decentred, nonhuman model of control by algorithm for communication and networks. To illustrate the rise of contemporary algorithmic control, I outline a particular example, that of the biologically-inspired routing algorithm known as a ‘quorum sensing’ algorithm. The increasing expansion of algorithms as a sense-making apparatus is important in the context of social media, but also in the subsystems that coordinate networked flows of information. In that domain, algorithms are not inferring categories of identity, sociality, and practice associated with Internet consumers, rather, these algorithms are designed to act on information flows as they are transmitted along the network. The development of autonomous control realized through the power of the algorithm to monitor, sort, organize, determine, and transmit communication is the form of control emerging as a postscript to Gilles Deleuze’s ‘postscript on societies of control.’

  18. Changing Conditions for Networked Learning?

    DEFF Research Database (Denmark)

    Ryberg, Thomas

    2011-01-01

    In this talk I should like to initially take a critical look at popular ideas and discourses related to web 2.0, social technologies and learning. I argue that many of the pedagogical ideals particularly associated with web 2.0 have a longer history and background, which is often forgotten ... of social technologies. I argue that we are seeing the emergence of new architectures and scales of participation, collaboration and networking e.g. through interesting formations of learning networks at different levels of scale, for different purposes and often bridging boundaries such as formal ...

  19. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Full Text Available Because localization algorithms for large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared to traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm obtains schematic node coordinates through fast mapping initialization, fast mapping, and coordinate transformation, uses them to initialize the MDS algorithm, computes accurate estimates of the node coordinates, and applies Procrustes analysis to align the coordinates and obtain the final node positions. The paper gives the specific implementation steps of this four-step algorithm. Finally, the proposed algorithm is compared experimentally with stochastic algorithms and the classical MDS algorithm on concrete examples. Experimental results show that the proposed fast multidimensional scaling localization algorithm maintains positioning accuracy under certain conditions while greatly improving the speed of operation.

  20. Convergence of Batch Split-Complex Backpropagation Algorithm for Complex-Valued Neural Networks

    Directory of Open Access Journals (Sweden)

    Huisheng Zhang

    2009-01-01

    Full Text Available The batch split-complex backpropagation (BSCBP) algorithm for training complex-valued neural networks is considered. For a constant learning rate, it is proved that the error function of the BSCBP algorithm is monotone during the training iteration process, and that the gradient of the error function tends to zero. By adding a moderate condition, the weight sequence itself is also proved to be convergent. A numerical example is given to support the theoretical analysis.
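
    A minimal numpy sketch of the split-complex idea for a single complex-valued neuron with a split tanh activation and a constant learning rate; the paper analyses full networks, and the synthetic data, weights, and constants here are assumptions made for illustration only.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3)) + 1j * rng.normal(size=(200, 3))
      w_true = np.array([0.5 - 0.2j, -0.3 + 0.4j, 0.1 + 0.1j])
      y = np.tanh((X @ w_true).real) + 1j * np.tanh((X @ w_true).imag)   # teacher outputs

      wr, wi = np.zeros(3), np.zeros(3)                 # real and imaginary weight parts
      eta = 0.05                                        # constant learning rate
      for epoch in range(500):
          s = X @ (wr + 1j * wi)
          out = np.tanh(s.real) + 1j * np.tanh(s.imag)  # split activation
          err = out - y
          d_sr = err.real * (1 - np.tanh(s.real) ** 2)  # gradient w.r.t. real net input
          d_si = err.imag * (1 - np.tanh(s.imag) ** 2)  # gradient w.r.t. imaginary net input
          g_wr = X.real.T @ d_sr + X.imag.T @ d_si      # split-complex chain rule
          g_wi = -X.imag.T @ d_sr + X.real.T @ d_si
          wr -= eta / len(X) * g_wr                     # batch update of both weight parts
          wi -= eta / len(X) * g_wi
      print("recovered weights:", np.round(wr + 1j * wi, 2))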

  1. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    Science.gov (United States)

    Peng, Bo; Li, Lei

    2015-04-01

    Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a decentralized wireless network composed of nodes that autonomously set up the network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms have hardware requirements and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free localization solutions are being pursued as a cost-effective alternative to the more expensive range-based approaches. However, these techniques usually have higher localization error compared to range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves the localization accuracy compared with previous algorithms.
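
    A hedged sketch of the refinement step only: given anchor positions and DV-Hop style distance estimates (assumed precomputed from hop counts and the average hop distance), a small genetic algorithm searches for the node position that minimizes the ranging error. The population size, operators, and constants are illustrative and not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      anchors = np.array([[0., 0.], [100., 0.], [50., 90.]])
      est_dist = np.array([55., 60., 48.])          # hops * average hop distance (assumed given)

      def fitness(p):                               # lower is better: squared ranging error
          return np.sum((np.linalg.norm(anchors - p, axis=1) - est_dist) ** 2)

      pop = rng.uniform(0, 100, size=(40, 2))       # candidate (x, y) positions
      for _ in range(100):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)[:20]]               # selection of the fittest half
          kids = (parents[rng.integers(0, 20, 20)] +
                  parents[rng.integers(0, 20, 20)]) / 2.0      # arithmetic crossover
          kids += rng.normal(0, 1.0, kids.shape)               # mutation
          pop = np.vstack([parents, kids])
      best = pop[np.argmin([fitness(p) for p in pop])]
      print("estimated node position:", best)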

  2. Reinforcement Learning: Stochastic Approximation Algorithms for Markov Decision Processes

    OpenAIRE

    Krishnamurthy, Vikram

    2015-01-01

    This article presents a short and concise description of stochastic approximation algorithms in reinforcement learning of Markov decision processes. The algorithms can also be used as a suboptimal method for partially observed Markov decision processes.
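
    As a concrete instance of such a stochastic approximation scheme, the sketch below runs tabular Q-learning with a decaying step size on a toy chain MDP; the MDP, the uniform exploration policy, and all constants are invented purely for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      n_states, n_actions, gamma = 5, 2, 0.9
      Q = np.zeros((n_states, n_actions))

      def step(s, a):
          """Toy chain MDP: action 1 moves right (reward 1 on reaching the last state), action 0 resets."""
          if a == 1:
              s2 = min(s + 1, n_states - 1)
              return s2, 1.0 if s2 == n_states - 1 else 0.0
          return 0, 0.0

      s = 0
      for k in range(1, 20001):
          a = int(rng.integers(n_actions))              # behaviour policy: uniform exploration
          s2, r = step(s, a)
          alpha = 1.0 / (1 + k / 1000.0)                # decaying Robbins-Monro step size
          Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # stochastic approximation update
          s = 0 if s2 == n_states - 1 else s2           # restart the episode at the goal
      print(np.round(Q, 2))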

  3. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm that describes how the object operates. An algorithmic model is understood as a formalized description of a subject specialist's scenario for the simulated process, whose structure matches the structure of the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are the variables bound by those operators. The language of algorithmic networks is highly expressive with respect to the class of algorithms it can represent. Existing modeling automation systems based on algorithmic networks mainly use operators working with real numbers. Although this reduces their expressive power, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, their monitoring is based on analyzing gaps and deadlines in the graphs, with no analysis that predicts schedule execution. The library described here is designed to build such predictive models. Specifying the source data yields a set of projections, from which one is chosen and taken as the new plan.

  4. A Comparison of First-order Algorithms for Machine Learning

    OpenAIRE

    Wei, Yu; Thomas, Pock

    2014-01-01

    Using an optimization algorithm to solve a machine learning problem is one of the mainstream approaches in the field. In this work, we demonstrate a comprehensive comparison of some state-of-the-art first-order optimization algorithms for convex optimization problems in machine learning. We concentrate on several smooth and non-smooth machine learning problems with a loss function plus a regularizer. The overall experimental results show the superiority of primal-dual algorithms in solving a mac...

  5. Divide and Conquer Algorithms for Faster Machine Learning

    OpenAIRE

    Izbicki, Michael

    2017-01-01

    This thesis improves the scalability of machine learning by studying mergeable learning algorithms. In a mergeable algorithm, many processors independently solve the learning problem on small subsets of the data. Then a master processor merges the solutions together with only a single round of communication. Mergeable algorithms are popular because they are fast, easy to implement, and have strong privacy guarantees. Our first contribution is a novel fast cross validation procedure suitable for any m...

  6. Exploring the Peer Interaction Effects on Learning Achievement in a Social Learning Platform Based on Social Network Analysis

    Science.gov (United States)

    Lin, Yu-Tzu; Chen, Ming-Puu; Chang, Chia-Hu; Chang, Pu-Chen

    2017-01-01

    The benefits of social learning have been recognized by existing research. To explore knowledge distribution in social learning and its effects on learning achievement, we developed a social learning platform and explored students' behaviors of peer interactions by the proposed algorithms based on social network analysis. An empirical study was…

  7. Improved Degree Search Algorithms in Unstructured P2P Networks

    Directory of Open Access Journals (Sweden)

    Guole Liu

    2012-01-01

    Full Text Available Searching for and retrieving the desired correct information is an important problem in networks; in particular, designing an efficient search algorithm is a key challenge in unstructured peer-to-peer (P2P) networks. Breadth-first search (BFS) and depth-first search (DFS) are the two current typical search methods. BFS-based algorithms show excellent performance in terms of the search success rate for network resources, but generate a huge number of search messages. On the contrary, DFS-based algorithms reduce the search message quantity but also cause the search success ratio to drop. To address the problem that only one of these performances can be excellent at a time, we propose two memory function degree search algorithms: the memory function maximum degree algorithm (MD) and the memory function preference degree algorithm (PD). We study their performance, including the search success rate and the search message quantity, in different networks: scale-free networks, random graph networks, and small-world networks. Simulations show that both performances are excellent at the same time, and that they are improved by at least a factor of 10.

  8. A generalized LSTM-like training algorithm for second-order recurrent neural networks.

    Science.gov (United States)

    Monner, Derek; Reggia, James A

    2012-01-01

    The long short term memory (LSTM) is a second-order recurrent neural network architecture that excels at storing sequential short-term memories and retrieving them many time-steps later. LSTM's original training algorithm provides the important properties of spatial and temporal locality, which are missing from other training approaches, at the cost of limiting its applicability to a small set of network architectures. Here we introduce the generalized long short-term memory (LSTM-g) training algorithm, which provides LSTM-like locality while being applicable without modification to a much wider range of second-order network architectures. With LSTM-g, all units have an identical set of operating instructions for both activation and learning, subject only to the configuration of their local environment in the network; this is in contrast to the original LSTM training algorithm, where each type of unit has its own activation and training instructions. When applied to LSTM architectures with peephole connections, LSTM-g takes advantage of an additional source of back-propagated error which can enable better performance than the original algorithm. Enabled by the broad architectural applicability of LSTM-g, we demonstrate that training recurrent networks engineered for specific tasks can produce better results than single-layer networks. We conclude that LSTM-g has the potential to both improve the performance and broaden the applicability of spatially and temporally local gradient-based training algorithms for recurrent neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. A repair algorithm for radial basis function neural network and its application to chemical oxygen demand modeling.

    Science.gov (United States)

    Qiao, Jun-Fei; Han, Hong-Gui

    2010-02-01

    This paper presents a repair algorithm for the design of a Radial Basis Function (RBF) neural network. The proposed repair RBF (RRBF) algorithm starts from a single prototype randomly initialized in the feature space. The algorithm has two main phases: an architecture learning phase and a parameter adjustment phase. The architecture learning phase uses a repair strategy based on a sensitivity analysis (SA) of the network's output to judge when and where hidden nodes should be added to the network. New nodes are added to repair the architecture when the prototype does not meet the requirements. The parameter adjustment phase uses an adjustment strategy where the capabilities of the network are improved by modifying all the weights. The algorithm is applied to two application areas: approximating a non-linear function, and modeling the key parameter, chemical oxygen demand (COD) used in the waste water treatment process. The results of simulation show that the algorithm provides an efficient solution to both problems.
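
    The following is an illustrative sketch in the spirit of the growth strategy described above, not the authors' exact RRBF algorithm: hidden nodes are added where the residual is largest (a crude stand-in for the sensitivity analysis), and output weights are refit by least squares; the target function and thresholds are assumptions.

      import numpy as np

      def rbf_design(X, centers, width=0.5):
          # Gaussian design matrix: one column per hidden prototype.
          d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
          return np.exp(-(d / width) ** 2)

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(200, 1))
      y = np.sin(X[:, 0])                                  # non-linear target function

      centers = X[rng.integers(len(X))][None, :]           # start from a single random prototype
      for _ in range(20):                                  # architecture learning phase
          Phi = rbf_design(X, centers)
          w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # parameter adjustment phase
          resid = np.abs(Phi @ w - y)
          if resid.max() < 0.05:                           # requirements met: stop repairing
              break
          centers = np.vstack([centers, X[resid.argmax()]])  # repair: add a node where error peaks
      print(f"hidden nodes: {len(centers)}, max residual: {resid.max():.3f}")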

  10. Energy Aware Clustering Algorithms for Wireless Sensor Networks

    Science.gov (United States)

    Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian

    2011-09-01

    The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire networks is mainly considered in the design. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and also compare these clustering algorithms based on metrics such as clustering distribution, cluster's load balancing, Cluster Head's (CH) selection strategy, CH's role rotation, node mobility, clusters overlapping, intra-cluster communications, reliability, security and location awareness.

  11. Hybrid Wireless Sensor Network Coverage Holes Restoring Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Zhouzhou

    2016-01-01

    Full Text Available To address the coverage holes caused by the necessary movement or failure of nodes in a wireless sensor and actuator network, this paper proposes a coverage restoring scheme based on a hybrid particle swarm optimization algorithm. The scheme first introduces a grid-based measure of network coverage, transforms the coverage restoring problem into an unconstrained optimization problem with network coverage as the optimization target, and then solves the optimization problem using a hybrid particle swarm optimization algorithm that incorporates the idea of simulated annealing. Simulation results show that the probabilistic jumping property of the simulated annealing algorithm compensates for the tendency of particle swarm optimization to fall into premature convergence, and that the hybrid algorithm can effectively solve the coverage restoring problem.
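
    A hedged sketch of a PSO and simulated-annealing hybrid applied to grid-based coverage restoration: particles encode the positions of a few redeployable nodes, and worse personal bests are occasionally accepted with a temperature-controlled probability. The area size, node count, and cooling schedule are assumptions for illustration, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(2)
      xs = np.linspace(0, 50, 26)
      grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)   # monitored grid points
      R, M = 10.0, 4                                                 # sensing radius, mobile nodes

      def coverage(flat_pos):
          nodes = flat_pos.reshape(M, 2)
          d = np.linalg.norm(grid[:, None, :] - nodes[None, :, :], axis=2)
          return (d.min(axis=1) <= R).mean()          # fraction of grid points covered

      P = 20                                          # swarm size
      pos = rng.uniform(0, 50, (P, 2 * M))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([coverage(p) for p in pos])
      T = 1.0                                         # annealing temperature
      for it in range(200):
          g = pbest[pbest_val.argmax()]               # global best particle
          vel = (0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos)
                           + 1.5 * rng.random(pos.shape) * (g - pos))
          pos = np.clip(pos + vel, 0, 50)
          val = np.array([coverage(p) for p in pos])
          accept = (val > pbest_val) | (rng.random(P) < np.exp((val - pbest_val) / T))
          pbest[accept], pbest_val[accept] = pos[accept], val[accept]   # SA-style acceptance
          T *= 0.98                                   # cooling schedule
      print("restored coverage:", pbest_val.max())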

  12. Algorithms for Finding Small Attractors in Boolean Networks

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2007-01-01

    Full Text Available A Boolean network is a model used to study the interactions between different genes in genetic regulatory networks. In this paper, we present several algorithms using gene ordering and feedback vertex sets to identify singleton attractors and small attractors in Boolean networks. We analyze the average case time complexities of some of the proposed algorithms. For instance, the outdegree-based ordering algorithm for finding singleton attractors is shown to run much faster than the naive algorithm, with time bounds expressed in terms of the number of genes and the maximum indegree. We performed extensive computational experiments on these algorithms, which resulted in good agreement with the theoretical results. In contrast, we give a simple and complete proof showing that finding an attractor with the shortest period is NP-hard.

  13. Evaluation of multilayer perceptron algorithms for an analysis of network flow data

    Science.gov (United States)

    Bieniasz, Jedrzej; Rawski, Mariusz; Skowron, Krzysztof; Trzepiński, Mateusz

    2016-09-01

    The volume of information exchanged through IP networks is larger than ever and still growing. It creates a space for both benign and malicious activities. The latter raises the need for security in network devices, as well as in the network infrastructure and the system as a whole. One of the basic tools to prevent cyber attacks is a Network Intrusion Detection System (NIDS). A NIDS can be realized as a signature-based detector or an anomaly-based one. In the last few years the emphasis has been placed on the latter type, because of the possibility of applying smart and intelligent solutions. An ideal NIDS of the next generation should be composed of self-learning algorithms that can react to known and unknown malicious network activities, respectively. In this paper we evaluated a machine learning approach for the detection of anomalies in IP network data represented as NetFlow records. We considered the Multilayer Perceptron (MLP) as the classifier and used two types of learning algorithms - Backpropagation (BP) and Particle Swarm Optimization (PSO). This paper includes a comprehensive survey on determining the optimal MLP learning algorithm for the classification problem in application to network flow data. The performance, training time and convergence of the BP and PSO methods were compared. The results show that the PSO algorithm implemented by the authors outperformed the other solutions when accuracy of classification is considered. The major disadvantage of PSO is training time, which may not be acceptable for larger data sets or in real network applications. At the end we compare some key findings with the results from other papers to show that in all cases the results from this study outperformed them.
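
    A minimal baseline sketch, assuming NetFlow records have already been converted into a numeric feature matrix: a backpropagation-trained MLP classifier with a hold-out evaluation. The PSO-trained variant the paper compares against is not reproduced here, and the synthetic features and labels are stand-ins for real flow data.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 8))                  # stand-in for per-flow features
      y = (X[:, 0] + X[:, 3] ** 2 > 1.0).astype(int)  # stand-in anomaly label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
      clf.fit(X_tr, y_tr)                             # backpropagation training
      print("hold-out accuracy:", clf.score(X_te, y_te))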

  14. Use of genetic algorithms for encoding efficient neural network architectures: neurocomputer implementation

    Science.gov (United States)

    James, Jason; Dagli, Cihan H.

    1995-04-01

    In this study an attempt is being made to encode the architecture of a neural network in a chromosome string for evolving robust, fast-learning, minimal neural network architectures through genetic algorithms. Various attributes affecting the learning of the network are represented as genes. The performance of the networks is used as the fitness value. Neural network architecture design concepts are initially demonstrated using a backpropagation architecture with the standard data set of Rosenberg and Sejnowski for text to speech conversion on Adaptive Solutions Inc.'s CNAPS Neuro-Computer. The architectures obtained are compared with the one reported in the literature for the standard data set used. The study concludes by providing some insights regarding the architecture encoding for other artificial neural network paradigms.
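
    A hedged sketch of the encoding idea: each chromosome carries a couple of network attributes (here, the number of hidden units and the learning rate, which are illustrative choices rather than the study's gene set), and validation accuracy serves as the fitness value driving selection, crossover, and mutation.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=400, n_features=12, random_state=0)
      X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
      rng = np.random.default_rng(0)

      def fitness(gene):                       # gene = (hidden units, log10 learning rate)
          units, lr = int(gene[0]), 10.0 ** gene[1]
          net = MLPClassifier(hidden_layer_sizes=(units,), learning_rate_init=lr,
                              max_iter=300, random_state=0)
          net.fit(X_tr, y_tr)
          return net.score(X_va, y_va)         # validation accuracy as the fitness value

      pop = np.column_stack([rng.integers(4, 64, 8), rng.uniform(-4.0, -1.0, 8)]).astype(float)
      for generation in range(5):
          scores = np.array([fitness(g) for g in pop])
          parents = pop[np.argsort(scores)[-4:]]                   # selection: keep the fittest
          moms = parents[rng.integers(0, 4, 4)]
          dads = parents[rng.integers(0, 4, 4)]
          kids = np.column_stack([moms[:, 0], dads[:, 1]])         # one-point crossover
          kids[:, 0] = np.clip(kids[:, 0] + rng.integers(-8, 9, 4), 4, 64)       # mutate units
          kids[:, 1] = np.clip(kids[:, 1] + rng.normal(0, 0.3, 4), -4.0, -1.0)   # mutate lr
          pop = np.vstack([parents, kids])
      best = pop[np.argmax([fitness(g) for g in pop])]
      print("best genes (hidden units, log10 learning rate):", best)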

  15. Neural Network Algorithm for Prediction of Secondary Protein Structure

    National Research Council Canada - National Science Library

    Zikrija Avdagic; Elvir Purisevic; Emir Buza; Zlatan Coralic

    2009-01-01

    .... In this paper we describe the method and results of using CB513 as a dataset suitable for development of artificial neural network algorithms for prediction of secondary protein structure with MATLAB...

  16. Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm

    Directory of Open Access Journals (Sweden)

    O. G. Monahov

    2014-01-01

    Full Text Available This paper investigates a solution of the optimization problem concerning the construction of diameter-optimal regular networks (graphs). Regular networks are of practical interest as graph-theoretical models of reliable communication networks of parallel supercomputer systems, and as a basis for small-world structures in optical and neural networks. The paper presents a new class of parametrically described regular networks - hypercirculant networks (graphs). An approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. Synthesis of optimal hypercirculant networks is based on optimal circulant networks with a smaller node degree. To construct optimal hypercirculant networks, a template circulant network is taken from the known optimal families of circulant networks with the desired number of nodes and a smaller node degree. Thus, the generating set of the circulant network is used as a generating subset of the hypercirculant network, and the missing generators are synthesized by means of the evolutionary algorithm, which minimizes the diameter (or average diameter) of the networks. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted. The advantage of hypercirculant networks in such structural characteristics as diameter, average diameter, and bisection width, at comparable numbers of nodes and connections, is demonstrated. Notably, hypercirculant networks of dimension three have an advantage over four- and higher-dimensional tori; thus, optimizing hypercirculant networks of dimension three is more efficient than introducing an additional dimension in the corresponding toroidal structures. The paper also notes the better structural parameters of hypercirculant networks in comparison with iBT-networks previously

  17. An algorithm for link restoration in wavelength translating networks

    DEFF Research Database (Denmark)

    Limal, Emmanuel; Gliese, Ulrik Bo

    1999-01-01

    We propose the BONRA, a new and innovative algorithm for dynamic allocation of working and spare channel capacity for single link restoration in wavelength translating optical networks. The BONRA has very low calculation complexity yet gives high capacity utilisation.

  18. Engineering Algorithms for Route Planning in Multimodal Transportation Networks

    OpenAIRE

    Dibbelt, Julian Matthias

    2016-01-01

    Practical algorithms for route planning in transportation networks are a showpiece of successful Algorithm Engineering. This has produced many speedup techniques, varying in preprocessing time, space, query performance, simplicity, and ease of implementation. This thesis explores solutions to more realistic scenarios, taking into account, e.g., traffic, user preferences, public transit schedules, and the options offered by the many modalities of modern transportation networks.

  19. An algorithm for link restoration of wavelength routing optical networks

    DEFF Research Database (Denmark)

    Limal, Emmanuel; Stubkjær, Kristian

    1999-01-01

    We present an algorithm for restoration of single link failure in wavelength routing multihop optical networks. The algorithm is based on an innovative study of networks using graph theory. It has the following original features: it (i) assigns working and spare channels simultaneously, (ii) prevents the search for unacceptable routing paths by pointing out channels required for restoration, (iii) offers a high utilization of the capacity resources and (iv) allows a trivial search for the restoration paths. The algorithm is for link restoration of networks without wavelength translation. Its low complexity is studied in detail and compared to the complexity of a classical path assignment algorithm. Finally, we explain how to use the algorithm to control the restoration path lengths.

  20. A generalized clustering algorithm for dynamic wireless sensor networks

    NARCIS (Netherlands)

    Marin Perianu, Raluca; Hurink, Johann L.; Hartel, Pieter H.

    We propose a general clustering algorithm for dynamic sensor networks, that makes localized decisions (1-hop neighbourhood) and produces disjoint clusters. The purpose is to extract and emphasise the essential clustering mechanisms common for a set of state-of-the-art algorithms, which allows for a

  1. A Generalized Clustering Algorithm for Dynamic Wireless Sensor Networks

    NARCIS (Netherlands)

    Marin Perianu, Raluca; Hurink, Johann L.; Hartel, Pieter H.

    2008-01-01

    We propose a general clustering algorithm for dynamic sensor networks, that makes localized decisions (1-hop neighbourhood) and produces disjoint clusters. The purpose is to extract and emphasise the essential clustering mechanisms common for a set of state-of-the-art algorithms, which allows for a

  2. Practical Algorithms for Subgroup Detection in Covert Networks

    DEFF Research Database (Denmark)

    Memon, Nasrullah; Wiil, Uffe Kock; Qureshi, Pir Abdul Rasool

    2010-01-01

    In this paper, we present algorithms for subgroup detection and demonstrate them with a real-time case study of the USS Cole bombing terrorist network. The algorithms are demonstrated in an application by a prototype system. The system finds associations between terrorists and terrorist organisations...

  3. Acoustic Performance of Exhaust Muffler based Genetic Algorithms and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Xiao Li

    2013-07-01

    Full Text Available The noise level is one of the important indicators of the quality and performance of a diesel engine, and exhaust noise accounts for an important proportion of the overall machine noise; installing an exhaust muffler is an effective way to control exhaust noise. This article uses an orthogonal test program with the muffler structure parameters as inputs, and the sound pressure level and diesel fuel consumption as outputs, to build learning samples for an artificial neural network (BP network). The Matlab artificial neural network toolbox is used to complete the training of the network, and a combination of internal muffler structure parameters with better noise performance and fuel consumption rate is obtained through a genetic algorithm. The collaborative validation of artificial neural networks and genetic algorithms shows that their application to exhaust muffler design is entirely feasible.

  4. Time series classification using k-Nearest neighbours, Multilayer Perceptron and Learning Vector Quantization algorithms

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2012-01-01

    Full Text Available In this paper we present a comparison of the results of three artificial intelligence algorithms for the classification of time series derived from musical excerpts. The algorithms were chosen to represent different principles of classification - a statistical approach, neural networks, and competitive learning. The first algorithm is the classical k-Nearest neighbours algorithm, the second is the Multilayer Perceptron (MLP), an example of an artificial neural network, and the third is the Learning Vector Quantization (LVQ) algorithm, the supervised counterpart of the unsupervised Self Organizing Map (SOM). After our own earlier experiments with unlabelled data we moved on to utilizing data labels, which generally leads to better classification accuracy. As we need a huge data set of labelled time series (a priori knowledge of the correct class to which each time series instance belongs), we used musical excerpts as a source of real-world time series, building on good experience from former studies. We use the standard deviation of the sound signal as a descriptor of a musical excerpt's volume level. We briefly describe the principle of each algorithm as well as its implementation, giving links for further research. The classification results of each algorithm are presented in a confusion matrix showing the numbers of misclassifications and allowing the overall accuracy of the algorithm to be evaluated. The results are compared and particular misclassifications are discussed for each algorithm. Finally the best solution is chosen and further research goals are given.
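
    An illustrative comparison of two of the three classifiers on windowed standard-deviation features, roughly following the volume-level descriptor mentioned above; scikit-learn has no LVQ implementation, so that algorithm is omitted, and the synthetic "quiet" and "loud" excerpts are stand-ins for real music.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # Two synthetic "excerpt" classes: 100 quiet and 100 loud signals of 200 samples each.
      quiet = rng.normal(0, 0.5, (100, 200))
      loud = rng.normal(0, 2.0, (100, 200))
      signals = np.vstack([quiet, loud])
      # Descriptor: standard deviation of the signal in each of 20 windows.
      X = np.array([sig.reshape(20, -1).std(axis=1) for sig in signals])
      y = np.array([0] * 100 + [1] * 100)

      for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                        ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                              random_state=0))]:
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{name}: mean cross-validated accuracy = {acc:.3f}")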

  5. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems, develop a new sequential algorithm for this operation, and also suggest a distributed version of the algorithm.

  6. New Heuristic Algorithm for Dynamic Traffic in WDM Optical Networks

    Directory of Open Access Journals (Sweden)

    Arturo Benito Rodríguez Garcia

    2015-12-01

    Full Text Available The results and comparison of the simulation of a new heuristic algorithm called Snake One are presented. The comparison is made with three heuristic algorithms, Genetic Algorithms, Simulated Annealing, and Tabu Search, using blocking probability and network utilization as standard indicators. The simulation was made on the WDM NSFNET under dynamic traffic conditions. The results show a substantial decrease of blocking, but this causes a relative growth of network utilization. There are also load intervals at which its performance improves, decreasing the number of blocked requests.

  7. Associative learning in biochemical networks.

    Science.gov (United States)

    Gandhi, Nikhil; Ashkenasy, Gonen; Tannenbaum, Emmanuel

    2007-11-07

    It has been recently suggested that there are likely generic features characterizing the emergence of systems constructed from the self-organization of self-replicating agents acting under one or more selection pressures. Therefore, structures and behaviors at one length scale may be used to infer analogous structures and behaviors at other length scales. Motivated by this suggestion, we seek to characterize various "animate" behaviors in biochemical networks, and the influence that these behaviors have on genomic evolution. Specifically, in this paper, we develop a simple, chemostat-based model illustrating how a process analogous to associative learning can occur in a biochemical network. Associative learning is a form of learning whereby a system "learns" to associate two stimuli with one another. Associative learning, also known as conditioning, is believed to be a powerful learning process at work in the brain (associative learning is essentially "learning by analogy"). In our model, two types of replicating molecules, denoted as A and B, are present in some initial concentration in the chemostat. Molecules A and B are stimulated to replicate by some growth factors, denoted as G(A) and G(B), respectively. It is also assumed that A and B can covalently link, and that the conjugated molecule can be stimulated by either the G(A) or G(B) growth factors (and can be degraded). We show that, if the chemostat is stimulated by both growth factors for a certain time, followed by a time gap during which the chemostat is not stimulated at all, and if the chemostat is then stimulated again by only one of the growth factors, then there will be a transient increase in the number of molecules activated by the other growth factor. Therefore, the chemostat bears the imprint of earlier, simultaneous stimulation with both growth factors, which is indicative of associative learning. It is interesting to note that the dynamics of our model is consistent with certain aspects of

  8. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous spectrophotometric multicomponent analysis are suggested, with a study on the estimation of the components of an antihypertensive combination, namely, atenolol and losartan potassium.

  9. Wireless Sensor Networks : Structure and Algorithms

    NARCIS (Netherlands)

    van Dijk, T.C.|info:eu-repo/dai/nl/304841293

    2014-01-01

    In this thesis we look at various problems in wireless networking. First we consider two problems in physical-model networks. We introduce a new model for localisation. The model is based on a range-free model of radio transmissions. The first scheme is randomised and we analyse its expected

  10. Blending Formal and Informal Learning Networks for Online Learning

    Science.gov (United States)

    Czerkawski, Betül C.

    2016-01-01

    With the emergence of social software and the advance of web-based technologies, online learning networks provide invaluable opportunities for learning, whether formal or informal. Unlike top-down, instructor-centered, and carefully planned formal learning settings, informal learning networks offer more bottom-up, student-centered participatory…

  11. District Heating Network Design and Configuration Optimization with Genetic Algorithm

    DEFF Research Database (Denmark)

    Li, Hongwei; Svendsen, Svend

    2013-01-01

    and the pipe friction and heat loss formulations are non-linear. In order to find the optimal district heating network configuration, genetic algorithm which handles the mixed integer nonlinear programming problem is chosen. The network configuration is represented with binary and integer encoding...

  12. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of

  13. Neural networks and perceptual learning

    Science.gov (United States)

    Tsodyks, Misha; Gilbert, Charles

    2005-01-01

    Sensory perception is a learned trait. The brain strategies we use to perceive the world are constantly modified by experience. With practice, we subconsciously become better at identifying familiar objects or distinguishing fine details in our environment. Current theoretical models simulate some properties of perceptual learning, but neglect the underlying cortical circuits. Future neural network models must incorporate the top-down alteration of cortical function by expectation or perceptual tasks. These newly found dynamic processes are challenging earlier views of static and feedforward processing of sensory information. PMID:15483598

  14. A computational study of routing algorithms for realistic transportation networks

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, R.; Marathe, M.V.; Nagel, K.

    1998-12-01

    The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and the associated data structures affect the computational performance of software developed for realistic transportation networks. For this purpose the authors used the Dallas Fort-Worth road network with a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest path algorithms, which include classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm. These extensions were primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include (i) time dependent networks, (ii) multi-modal networks, and (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
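
    A minimal sketch of the time-dependent extension (i): edge costs become functions of the departure time and a Dijkstra-style search propagates arrival times, assuming the usual FIFO property of travel times; the toy network and cost functions are invented for illustration.

      import heapq

      def td_dijkstra(graph, source, target, t0=0.0):
          """graph[u] = list of (v, cost_fn) where cost_fn(t) is the travel time leaving u at time t."""
          best = {source: t0}
          heap = [(t0, source)]
          while heap:
              t, u = heapq.heappop(heap)
              if u == target:
                  return t
              if t > best.get(u, float("inf")):
                  continue                      # stale heap entry
              for v, cost_fn in graph.get(u, []):
                  arrival = t + cost_fn(t)      # time-dependent edge cost
                  if arrival < best.get(v, float("inf")):
                      best[v] = arrival
                      heapq.heappush(heap, (arrival, v))
          return float("inf")

      # Toy network: the A->B link is slower during a "rush hour" window.
      graph = {
          "A": [("B", lambda t: 10 if 50 <= t <= 100 else 5), ("C", lambda t: 7)],
          "B": [("D", lambda t: 3)],
          "C": [("D", lambda t: 6)],
      }
      print("arrival time at D:", td_dijkstra(graph, "A", "D", t0=60))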

  15. Slow update stochastic simulation algorithms for modeling complex biochemical networks.

    Science.gov (United States)

    Ghosh, Debraj; De, Rajat K

    2017-10-30

    The stochastic simulation algorithm (SSA) based modeling is a well recognized approach to predict the stochastic behavior of biological networks. The stochastic simulation of large complex biochemical networks is a challenge, as it takes a large amount of time due to the high propensity update cost. In order to reduce this cost, we propose two algorithms: the slow update exact stochastic simulation algorithm (SUESSA) and the slow update exact sorting stochastic simulation algorithm (SUESSSA). We apply cache-based linear search (CBLS) in these two algorithms to improve the search operation for finding the reactions to be executed. The data structure used for incorporating CBLS is very simple, and the cost of maintaining it during propensity update operations is very low. Hence, propensity updates when simulating strongly coupled networks are very fast, which reduces the total simulation time. SUESSA and SUESSSA are not restricted to elementary reactions; they support higher order reactions too. We used a linear chain model and a colloidal aggregation model to perform a comparative analysis of the performances of our methods with existing algorithms. We also compared the performances of our methods with the existing ones for large biochemical networks, including the B cell receptor and FcϵRI signaling networks. Copyright © 2017 Elsevier B.V. All rights reserved.
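
    For orientation, a sketch of the plain direct-method SSA on a linear chain, using the naive full propensity recomputation that the SUESSA/SUESSSA variants are designed to avoid; the rate constants and molecule counts are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      # Linear chain A -> B -> C with unit rate constants.
      state = np.array([100, 0, 0])
      reactions = [np.array([-1, 1, 0]), np.array([0, -1, 1])]   # stoichiometry vectors
      rates = [1.0, 1.0]

      t, t_end = 0.0, 5.0
      while t < t_end:
          props = np.array([rates[0] * state[0], rates[1] * state[1]])  # recompute all (slow)
          total = props.sum()
          if total == 0:
              break
          t += rng.exponential(1.0 / total)                 # time to the next reaction
          j = rng.choice(len(reactions), p=props / total)   # which reaction fires
          state += reactions[j]
      print("final molecule counts (A, B, C):", state)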

  16. Robust adaptive learning of feedforward neural networks via LMI optimizations.

    Science.gov (United States)

    Jing, Xingjian

    2012-07-01

    Feedforward neural networks (FNNs) have been extensively applied to various areas such as control, system identification, function approximation, pattern recognition etc. A novel robust control approach to the learning problems of FNNs is further investigated in this study in order to develop efficient learning algorithms which can be implemented with optimal parameter settings and considering noise effect in the data. To this aim, the learning problem of a FNN is cast into a robust output feedback control problem of a discrete time-varying linear dynamic system. New robust learning algorithms with adaptive learning rate are therefore developed, using linear matrix inequality (LMI) techniques to find the appropriate learning rates and to guarantee the fast and robust convergence. Theoretical analysis and examples are given to illustrate the theoretical results. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. An algorithm for learning real-time automata

    NARCIS (Netherlands)

    Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.

    2007-01-01

    We describe an algorithm for learning simple timed automata, known as real-time automata. The transitions of real-time automata can have a temporal constraint on the time of occurrence of the current symbol relative to the previous symbol. The learning algorithm is similar to the red-blue fringe

  18. Challenges in the Verification of Reinforcement Learning Algorithms

    Science.gov (United States)

    Van Wesel, Perry; Goodloe, Alwyn E.

    2017-01-01

    Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.

  19. MODIS Aerosol Optical Depth Bias Adjustment Using Machine Learning Algorithms

    Science.gov (United States)

    Albayrak, A.; Wei, J. C.; Petrenko, M.; Lary, D. J.; Leptoukh, G. G.

    2011-12-01

    Over the past decade, global aerosol observations have been conducted by space-borne sensors, airborne instruments, and ground-based network measurements. Unfortunately, quite often we encounter differences between aerosol measurements from different well-calibrated instruments, even with a careful collocation in time and space. The differences might be rather substantial, and need to be better understood and accounted for when merging data from many sensors. The possible causes for these differences come from instrumental bias, different satellite viewing geometries, calibration issues, dynamically changing atmospheric and surface conditions, and other "regressors", resulting in random and systematic errors in the final aerosol products. In this study, we will concentrate on the subject of removing biases and the systematic errors from MODIS (both Terra and Aqua) aerosol products, using Machine Learning algorithms. While we are assessing our regressors in our system when comparing global aerosol products, the Aerosol Robotic Network of sun-photometers (AERONET) will be used as a baseline for evaluating the MODIS aerosol products (Dark Target for land and ocean, and Deep Blue retrieval algorithms). The results of bias adjustment for MODIS Terra and Aqua are planned to be incorporated into the AeroStat Giovanni as part of the NASA ACCESS funded AeroStat project.

  20. A clustering algorithm for determining community structure in complex networks

    Science.gov (United States)

    Jin, Hong; Yu, Wei; Li, ShiJun

    2018-02-01

    Clustering algorithms are attractive for the task of community detection in complex networks. DENCLUE is a representative density-based clustering algorithm which has a firm mathematical basis and good clustering properties, allowing for arbitrarily shaped clusters in high dimensional datasets. However, this method cannot be directly applied to community discovery due to its inability to deal with network data. Moreover, it requires a careful selection of the density parameter and the noise threshold. To solve these issues, a new community detection method is proposed in this paper. First, we use a spectral analysis technique to map the network data into a low dimensional Euclidean space which preserves the nodes' structural characteristics. Then, DENCLUE is applied to detect the communities in the network. A mathematical method named the Sheather-Jones plug-in is chosen to select the density parameter, which describes the intrinsic clustering structure accurately. Moreover, every node in the network is meaningful, so there are no noise nodes and the noise threshold can be ignored. We test our algorithm on both benchmark and real-life networks, and the results demonstrate the effectiveness of our algorithm over other popular density-based clustering algorithms adopted for community detection.
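
    An illustrative two-stage sketch of this idea: nodes are embedded with Laplacian eigenvectors and then clustered in the resulting Euclidean space. KMeans stands in for DENCLUE here (DENCLUE has no scikit-learn implementation), and the planted-partition graph is a synthetic benchmark rather than one of the paper's networks.

      import numpy as np
      import networkx as nx
      from sklearn.cluster import KMeans

      G = nx.planted_partition_graph(2, 30, p_in=0.3, p_out=0.02, seed=0)  # two communities
      A = nx.to_numpy_array(G)
      D = np.diag(A.sum(axis=1))
      L = D - A                                          # unnormalised graph Laplacian
      vals, vecs = np.linalg.eigh(L)
      embedding = vecs[:, 1:3]                           # skip the trivial constant eigenvector

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
      print("detected community sizes:", np.bincount(labels))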

  1. Optimizing of Passive Optical Network Deployment Using Algorithm with Metrics

    Directory of Open Access Journals (Sweden)

    Tomas Pehnelt

    2017-01-01

    Full Text Available Various approaches and methods are used for designing the optimum deployment of Passive Optical Networks (PON) according to selected optimization criteria, such as optimal trenching distance, endpoint attenuation and overall installed fibre length. This article describes the ideas behind, and possibilities of, an algorithm that applies graph algorithms for finding the shortest path from the Optical Line Termination to the Optical Network Terminal units. The algorithm uses a combination of different methods for generating an optimal metric, thus creating an optimized tree topology focused mainly on the total trenching distance. Furthermore, it deals with algorithms for finding an optimal placement of the optical splitters with the help of the K-Means clustering method and a hierarchical clustering technique. The results of the proposed algorithm are compared with existing methods.
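
    A minimal sketch of one step described above, assuming splitter sites may be placed anywhere: K-Means clustering of subscriber (ONT) coordinates proposes splitter locations, and straight-line fibre lengths approximate the trenching distances that the full algorithm would instead route over a street graph.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      onts = rng.uniform(0, 1000, size=(120, 2))        # subscriber coordinates in metres
      olt = np.array([0.0, 0.0])                        # central office (OLT) location

      splitters = KMeans(n_clusters=4, n_init=10, random_state=0).fit(onts)
      drop_len = np.linalg.norm(onts - splitters.cluster_centers_[splitters.labels_], axis=1).sum()
      feeder_len = np.linalg.norm(splitters.cluster_centers_ - olt, axis=1).sum()
      print(f"total drop fibre: {drop_len:.0f} m, total feeder fibre: {feeder_len:.0f} m")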

  2. ANOMALY DETECTION IN NETWORKING USING HYBRID ARTIFICIAL IMMUNE ALGORITHM

    Directory of Open Access Journals (Sweden)

    D. Amutha Guka

    2012-01-01

    Full Text Available In today's network scenario, where computers are interconnected through the Internet, the security of an information system is a very important issue. Because no system can be absolutely secure, the timely and accurate detection of anomalies is necessary. The main aim of this research paper is to improve anomaly detection by using a Hybrid Artificial Immune Algorithm (HAIA), which is based on Artificial Immune Systems (AIS) and Genetic Algorithms (GA). In this research work, the HAIA approach is used to develop a Network Anomaly Detection System (NADS). The detector set is generated by using the GA, and the anomalies are identified using the Negative Selection Algorithm (NSA), which is based on AIS. The HAIA algorithm is tested with the KDD Cup 99 benchmark dataset. The detection rate is used to measure the effectiveness of the NADS. The results and consistency of the HAIA are compared with earlier approaches, and the proposed algorithm gives the best results when compared to those approaches.
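
    A minimal real-valued negative selection sketch of the NSA component only; the GA-based detector generation of HAIA is not reproduced, and the feature space, radius, and data are illustrative assumptions.

```python
# Minimal real-valued negative selection sketch (NSA component only).
# Thresholds, feature space and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
self_set = rng.normal(0.5, 0.05, size=(200, 2))   # "normal" traffic features in [0, 1]
radius = 0.15

# Generate candidate detectors and keep only those far from every self sample.
detectors = []
while len(detectors) < 200:
    d = rng.uniform(0, 1, size=2)
    if np.min(np.linalg.norm(self_set - d, axis=1)) > radius:
        detectors.append(d)
detectors = np.array(detectors)

def is_anomaly(x):
    # A sample is flagged if any detector covers it.
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) < radius))

print(is_anomaly(np.array([0.5, 0.5])))   # likely False (inside the self region)
print(is_anomaly(np.array([0.9, 0.5])))   # likely True (far from the self region)
```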

  3. A Distributed Algorithm for Energy Optimization in Hydraulic Networks

    DEFF Research Database (Denmark)

    Kallesøe, Carsten; Wisniewski, Rafal; Jensen, Tom Nørgaard

    2014-01-01

    An industrial case study in the form of a large-scale hydraulic network underlying a district heating system is considered. A distributed control is developed that minimizes the aggregated electrical energy consumption of the pumps in the network without violating the control demands. The algorithm...... a Plug & Play control system as most commissioning can be done during the manufacture of the pumps. Only information on the graph-structure of the hydraulic network is needed during installation....

  4. Aggregation algorithm towards large-scale Boolean network analysis

    OpenAIRE

    Zhao, Y.; Kim, J.; Filippone, M.

    2013-01-01

    The analysis of large-scale Boolean network dynamics is of great importance in understanding complex phenomena where systems are characterized by a large number of components. The computational cost to reveal the number of attractors and the period of each attractor increases exponentially as the number of nodes in the networks increases. This paper presents an efficient algorithm to find attractors for medium to large-scale networks. This is achieved by analyzing subnetworks within the netwo...
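
    For illustration, the brute-force attractor search below shows what such an algorithm computes on a tiny synchronous Boolean network; the paper's aggregation strategy itself is not reproduced, and the update rules are a toy assumption.

```python
# Brute-force attractor search for a tiny synchronous Boolean network.
# Update rules are a toy example; real networks need the aggregation approach.
from itertools import product

def update(state):
    a, b, c = state
    return (b and not c, a or c, not a)   # toy synchronous update rules

attractors = set()
for start in product([False, True], repeat=3):
    seen, state, step = {}, start, 0
    while state not in seen:
        seen[state] = step
        state, step = update(state), step + 1
    cycle_start = seen[state]
    cycle = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1]) if i >= cycle_start]
    attractors.add(tuple(sorted(cycle)))

for att in attractors:
    print("attractor of period", len(att), ":", att)
```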

  5. Modeling and forecasting US presidential election using learning algorithms

    Science.gov (United States)

    Zolghadr, Mohammad; Niaki, Seyed Armin Akhavan; Niaki, S. T. A.

    2017-09-01

    The primary objective of this research is to obtain an accurate forecasting model for the US presidential election. To identify a reliable model, artificial neural network (ANN) and support vector regression (SVR) models are compared on a set of specified performance measures. Moreover, six independent variables, such as GDP, the unemployment rate, and the president's approval rate, are considered in a stepwise regression to identify significant variables. The president's approval rate is identified as the most significant variable, based on which eight other variables are identified and considered in the model development. Preprocessing methods are applied to prepare the data for the learning algorithms; the proposed procedure increases the accuracy of the model by 50%. The learning algorithms (ANN and SVR) proved superior to linear regression based on each method's calculated performance measures. The SVR model is identified as the most accurate of the models considered, as it successfully predicted the outcome of the last three elections (2004, 2008, and 2012). Overall, the proposed approach significantly increases the accuracy of the forecast.
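
    A hedged sketch of the ANN-versus-SVR comparison on synthetic stand-in data; the real predictors (GDP, unemployment, approval rate, and so on) and model settings of the paper are not reproduced here.

```python
# Sketch of an ANN vs. SVR comparison on synthetic data; predictors and model
# settings are illustrative assumptions, not those of the paper.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))                                   # 8 candidate predictors
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=60)   # vote-share proxy

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
    "SVR": SVR(C=1.0, epsilon=0.1),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)              # scaling as preprocessing
    score = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    print(name, "mean CV R^2:", round(score, 3))
```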

  6. Optimized feed-forward neural-network algorithm trained for cyclotron-cavity modeling

    Science.gov (United States)

    Mohamadian, Masoumeh; Afarideh, Hossein; Ghergherehchi, Mitra

    2017-01-01

    The cyclotron cavity presented in this paper is modeled by a feed-forward neural network (FFN) trained with the authors' optimized back-propagation (BP) algorithm. The training samples were obtained from simulation results for a number of defined situations and parameters, generated parametrically with the MWS CST software; furthermore, the conventional BP algorithm, with different numbers of hidden neurons, structures, and other parameters such as the learning rate, was also used here. The present study shows that an optimized FFN can be used to estimate the cyclotron-model parameters with an acceptable error function. A neural network trained by an optimized algorithm therefore provides a proper approximation and an acceptable ability to model the proposed structure. The cyclotron-cavity parameter-modeling results demonstrate that an FFN trained by the optimized algorithm could be a suitable method for estimating the design parameters in this case.

  7. Online Algorithms for Adaptive Optimization in Heterogeneous Delay Tolerant Networks

    Directory of Open Access Journals (Sweden)

    Wissam Chahin

    2013-12-01

    Full Text Available Delay Tolerant Networks (DTNs) are an emerging type of network which does not need a predefined infrastructure. In fact, data forwarding in DTNs relies on the contacts among nodes, which may possess different features, radio ranges, battery consumption, and radio interfaces. On the other hand, efficient message delivery under limited resources, e.g., battery or storage, requires optimizing the forwarding policies. We tackle optimal forwarding control for a DTN composed of nodes of different types, forming a so-called heterogeneous network. Using our model, we characterize the optimal policies and provide a suitable framework to design a new class of multi-dimensional stochastic approximation algorithms for heterogeneous DTNs. Crucially, our proposed algorithms drive the source node online to the optimal operating point without requiring explicit estimation of network parameters. A thorough analysis of the convergence properties and stability of our algorithms is presented.

  8. Node-Dependence-Based Dynamic Incentive Algorithm in Opportunistic Networks

    Directory of Open Access Journals (Sweden)

    Ruiyun Yu

    2014-01-01

    Full Text Available Opportunistic networks lack end-to-end paths between source nodes and destination nodes, so communication is mainly carried out by the “store-carry-forward” strategy. Selfish behaviors of rejecting packet relay requests will severely worsen network performance. Incentives are an efficient way to reduce selfish behaviors and hence improve the reliability and robustness of the networks. In this paper, we propose the node-dependence-based dynamic gaming incentive (NDI) algorithm, which exploits dynamic repeated gaming to motivate nodes to relay packets for other nodes. The NDI algorithm includes a mechanism for tolerating selfish behaviors of nodes. Reward and punishment methods are also designed based on the node dependence degree. Simulation results show that the NDI algorithm is effective in increasing the delivery ratio and decreasing the average latency when there are many selfish nodes in the opportunistic network.

  9. Real-world experimentation of distributed DSA network algorithms

    DEFF Research Database (Denmark)

    Tonelli, Oscar; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão

    2013-01-01

    The problem of spectrum scarcity in uncoordinated and/or heterogeneous wireless networks is the key aspect driving the research in the field of flexible management of frequency resources. In particular, distributed dynamic spectrum access (DSA) algorithms enable an efficient sharing of the available spectrum by nodes in a network, without centralized coordination. While proof-of-concept and statistical validation of such algorithms is typically achieved by using system level simulations, experimental activities are valuable contributions for the investigation of particular aspects such as a dynamic propagation environment, human presence impact and terminals mobility. This chapter focuses on the practical aspects related to the real-world experimentation with distributed DSA network algorithms over a testbed network. Challenges and solutions are extensively discussed, from the testbed design...

  10. Community Clustering Algorithm in Complex Networks Based on Microcommunity Fusion

    Directory of Open Access Journals (Sweden)

    Jin Qi

    2015-01-01

    Full Text Available With further research in recent years on the physical meaning and digital features of community structure in complex networks, improving the effectiveness and efficiency of community mining algorithms in complex networks has become an important subject in this area. This paper puts forward the concept of the microcommunity and obtains the final community mining results by fusing different microcommunities. The paper starts with the basic definition of the network community and applies Expansion to the microcommunity clustering, which provides the prerequisites for the microcommunity fusion. Analysis of test results on network data sets shows that the proposed algorithm is more efficient and has higher solution quality than other similar algorithms.

  11. Using network properties to evaluate targeted immunization algorithms

    Directory of Open Access Journals (Sweden)

    Bita Shams

    2014-09-01

    Full Text Available Immunization of complex networks with a minimal or limited budget is a challenging issue for the research community. In spite of much literature on network immunization, no comprehensive research has been conducted on the evaluation and comparison of immunization algorithms. In this paper, we propose an evaluation framework for immunization algorithms with regard to the available amount of vaccination resources, the goal of the immunization program, and time complexity. The evaluation framework is designed based on network topological metrics and is extensible to all epidemic spreading models. Applying the evaluation framework to well-known targeted immunization algorithms shows that, in general, immunization based on PageRank centrality outperforms other targeting strategies in various types of networks, whereas closeness and eigenvector centrality exhibit the worst performance.
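
    A small sketch of PageRank-targeted immunization: remove the top-k nodes by PageRank and check how much the giant component shrinks; the graph model and vaccination budget are assumptions, and the full evaluation framework of the paper is not reproduced.

```python
# Sketch: PageRank-targeted immunization on a synthetic scale-free graph.
# Graph model and budget are illustrative assumptions.
import networkx as nx

G = nx.barabasi_albert_graph(500, 3, seed=0)
budget = 25                                     # number of nodes we can vaccinate

ranked = sorted(nx.pagerank(G).items(), key=lambda kv: kv[1], reverse=True)
immunized = [node for node, _ in ranked[:budget]]

H = G.copy()
H.remove_nodes_from(immunized)                  # vaccinated nodes cannot transmit
giant = max(nx.connected_components(H), key=len)
print("giant component after immunization:", len(giant), "of", G.number_of_nodes())
```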

  12. Properties of healthcare teaming networks as a function of network construction algorithms.

    Science.gov (United States)

    Zand, Martin S; Trayhan, Melissa; Farooq, Samir A; Fucile, Christopher; Ghoshal, Gourab; White, Robert J; Quill, Caroline M; Rosenberg, Alexander; Barbosa, Hugo Serrano; Bush, Kristen; Chafi, Hassan; Boudreau, Timothy

    2017-01-01

    Network models of healthcare systems can be used to examine how providers collaborate, communicate, refer patients to each other, and to map how patients traverse the network of providers. Most healthcare service network models have been constructed from patient claims data, using billing claims to link a patient with a specific provider in time. The data sets can be quite large (10^6-10^8 individual claims per year), making standard methods for network construction computationally challenging and thus requiring the use of alternate construction algorithms. While these alternate methods have seen increasing use in generating healthcare networks, there is little to no literature comparing the differences in the structural properties of the generated networks, which as we demonstrate, can be dramatically different. To address this issue, we compared the properties of healthcare networks constructed using different algorithms from 2013 Medicare Part B outpatient claims data. Three different algorithms were compared: binning, sliding frame, and trace-route. Unipartite networks linking either providers or healthcare organizations by shared patients were built using each method. We find that each algorithm produced networks with substantially different topological properties, as reflected by numbers of edges, network density, assortativity, clustering coefficients and other structural measures. Provider networks adhered to a power law, while organization networks were best fit by a power law with exponential cutoff. Censoring networks to exclude edges with less than 11 shared patients, a common de-identification practice for healthcare network data, markedly reduced edge numbers and network density, and greatly altered measures of vertex prominence such as the betweenness centrality. Data analysis identified patterns in the distance patients travel between network providers, and a striking set of teaming relationships between providers in the Northeast United States and
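
    The sketch below illustrates the shared-patient construction and the censoring step on simulated claims (the study's threshold of 11 is lowered to 2 for the toy data); it is not the paper's binning, sliding-frame, or trace-route implementation.

```python
# Sketch: build a provider network linked by shared patients, then censor weak
# edges. Claims data are simulated; the threshold is lowered for the toy example.
import itertools
from collections import defaultdict

import networkx as nx

# claims: (patient_id, provider_id) pairs -- hypothetical stand-in for Part B data
claims = [("p1", "A"), ("p1", "B"), ("p2", "A"), ("p2", "B"), ("p3", "B"), ("p3", "C")]

providers_per_patient = defaultdict(set)
for patient, provider in claims:
    providers_per_patient[patient].add(provider)

shared = defaultdict(int)
for providers in providers_per_patient.values():
    for u, v in itertools.combinations(sorted(providers), 2):
        shared[(u, v)] += 1                     # count patients shared by each pair

G = nx.Graph()
for (u, v), w in shared.items():
    if w >= 2:                                  # censoring threshold (11 in the study)
        G.add_edge(u, v, weight=w)
print(list(G.edges(data=True)))
```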

  13. Information Dynamics in Networks: Models and Algorithms

    Science.gov (United States)

    2016-09-13

    Report form residue; the recoverable content is two publication references: a paper presented at ICDCS (29 June 2015, Columbus, OH, USA) and "Value-Based Network Externalities and Optimal Auction Design," in Web and Internet Economics - 10th International Conference (WINE 2014), Beijing, China, December 14-17, 2014, pages 147–160.

  14. Fast Parallel Algorithms for Graphs and Networks

    Science.gov (United States)

    1987-12-01

    Thesis front-matter residue; the recoverable technical fragment reads: "...both W(u) and L(v) have no more than 7s/8 vertices. Let x be some vertex. We can describe the history of x throughout the algorithm by a zero..."

  15. Incremental Centrality Algorithms for Dynamic Network Analysis

    Science.gov (United States)

    2013-08-01

    Report documentation-page residue; the recoverable technical fragment notes that a run time of O(m + n log n) can be achieved by implementing the priority queue using a Fibonacci heap [127] when Dijkstra's algorithm is invoked...

  16. An Energy Consumption Optimized Clustering Algorithm for Radar Sensor Networks Based on an Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Jiang Ting

    2010-01-01

    Full Text Available We optimize the cluster structure to solve problems such as the uneven energy consumption of radar sensor nodes and random cluster head selection in traditional clustering routing algorithms. Based on a defined cost function for clusters, we present a clustering algorithm based on radio free-space path loss. In addition, we propose energy and distance pheromones based on the residual energy and aggregation of the radar sensor nodes. Following this bionic heuristic approach, a new ant colony-based clustering algorithm for radar sensor networks is proposed. Simulation results show that this algorithm achieves a better balance of energy consumption and thus remarkably prolongs the lifetime of the radar sensor network.

  17. The guitar chord-generating algorithm based on complex network

    Science.gov (United States)

    Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais

    2016-02-01

    This paper aims to generate chords for popular songs automatically based on complex networks. First, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all the networks are summarized. By analyzing the diverse chord networks, the accompaniment rules and features are revealed, with which chords can be generated automatically. Second, in view of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed separately with the random walk algorithm. Third, the musical motif is taken into account when generating chords, with which poor chord progressions can be revised. This makes the accompaniments sound more melodious. Finally, a popular song is chosen for chord generation, and the newly generated accompaniment sounds better than those done by the composers.
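
    A minimal sketch of chord generation by a weighted random walk over a chord-transition network, with toy transition counts standing in for those learned from tablature.

```python
# Sketch: weighted random walk over a chord-transition network.
# Transition counts are toy values, not those learned from real tablature.
import random

transitions = {
    "C":  {"G": 4, "Am": 3, "F": 2},
    "G":  {"C": 3, "Am": 2, "Em": 1},
    "Am": {"F": 3, "G": 2},
    "F":  {"C": 4, "G": 2},
    "Em": {"Am": 2, "C": 1},
}

def random_walk(start, length, rng=random.Random(0)):
    chord, sequence = start, [start]
    for _ in range(length - 1):
        nxt = rng.choices(list(transitions[chord]),
                          weights=list(transitions[chord].values()))[0]
        sequence.append(nxt)
        chord = nxt
    return sequence

print(random_walk("C", 8))   # e.g. a generated 8-chord progression
```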

  18. Genetic Algorithm-Based Artificial Neural Network for Voltage Stability Assessment

    Directory of Open Access Journals (Sweden)

    Garima Singh

    2011-01-01

    Full Text Available With the emerging trend of restructuring in the electric power industry, many transmission lines have been forced to operate at almost their full capacities worldwide. As a result, more incidents of voltage instability and collapse are being observed throughout the world, leading to major system breakdowns. To avoid these undesirable incidents, a fast and accurate estimation of the voltage stability margin is required. In this paper, a genetic algorithm based back-propagation neural network (GABPNN) has been proposed for voltage stability margin estimation, which is an indication of the power system's proximity to voltage collapse. The proposed approach utilizes a hybrid algorithm that integrates a genetic algorithm and the back-propagation neural network, aiming to combine the capacity of GAs to avoid local minima with the fast execution of the BP algorithm. Input features for the GABPNN are selected on the basis of an angular distance-based clustering technique. The performance of the proposed GABPNN approach has been compared with the most commonly used gradient-based BP neural network by estimating the voltage stability margin at different loading conditions in the 6-bus and IEEE 30-bus systems. The GA-based neural network learns faster and, at the same time, provides more accurate voltage stability margin estimation than that based on the BP algorithm. It is found to be suitable for online applications in energy management systems.

  19. Automated training for algorithms that learn from genomic data.

    Science.gov (United States)

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, generally, these updates are not incorporated into published machine learning algorithms which thereby can become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable.

  20. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors while overcoming the complexity involved in solving a constrained optimization problem, as usually occurs in SVM training. In fact, all the training methods proposed in this paper have time and space complexities of O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  1. Improving the Neural GPU Architecture for Algorithm Learning

    OpenAIRE

    Freivalds, Karlis; Liepins, Renars

    2017-01-01

    Algorithm learning is a core problem in artificial intelligence with significant implications on automation level that can be achieved by machines. Recently deep learning methods are emerging for synthesizing an algorithm from its input-output examples, the most successful being the Neural GPU, capable of learning multiplication. We present several improvements to the Neural GPU that substantially reduces training time and improves generalization. We introduce a technique of general applicabi...

  2. Quantum Google algorithm. Construction and application to complex networks

    Science.gov (United States)

    Paparo, G. D.; Müller, M.; Comellas, F.; Martin-Delgado, M. A.

    2014-07-01

    We review the main findings on the ranking capabilities of the recently proposed Quantum PageRank algorithm (G.D. Paparo et al., Sci. Rep. 2, 444 (2012) and G.D. Paparo et al., Sci. Rep. 3, 2773 (2013)) applied to large complex networks. The algorithm has been shown to identify unambiguously the underlying topology of the network and to be capable of clearly highlighting the structure of secondary hubs of networks. Furthermore, it can resolve the degeneracy in importance of the low-lying part of the list of rankings. Examples of applications include real-world instances from the WWW, which typically display a scale-free network structure and models of hierarchical networks. The quantum algorithm has been shown to display an increased stability with respect to a variation of the damping parameter, present in the Google algorithm, and a more clearly pronounced power-law behaviour in the distribution of importance among the nodes, as compared to the classical algorithm.

  3. Performance evaluation of power control algorithms in wireless cellular networks

    Science.gov (United States)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network aims to control the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, defining the network base stations and mobile stations by their geographical coordinates, and accounting for the mobility of the mobile stations. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
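
    As an illustration of the algorithm class being compared, the sketch below runs a classic distributed power-control iteration (Foschini-Miljanic style) in which each link scales its power by the ratio of target to measured SINR; the gains, noise and target are assumptions, and this is not one of the three specific algorithms evaluated in the paper.

```python
# Sketch: distributed power-control iteration; each transmitter scales its power
# by target SINR / measured SINR. Gains, noise and target are illustrative.
import numpy as np

G = np.array([[1.0, 0.1, 0.2],        # link gains G[i, j]: transmitter j -> receiver i
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])
noise, target = 0.01, 3.0             # receiver noise power and target SINR
p = np.full(3, 0.1)                   # initial transmit powers

def sinr(p):
    interference = G @ p - np.diag(G) * p + noise
    return np.diag(G) * p / interference

for _ in range(50):
    p = p * target / sinr(p)          # each transmitter updates independently
print("powers:", np.round(p, 4), "SINR:", np.round(sinr(p), 3))
```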

  4. Algorithmic and analytical methods in network biology

    OpenAIRE

    Koyutürk, Mehmet

    2010-01-01

    During the genomic revolution, algorithmic and analytical methods for organizing, integrating, analyzing, and querying biological sequence data proved invaluable. Today, the increasing availability of high-throughput data pertaining to the functional states of biomolecules, as well as their interactions, enables genome-scale studies of the cell from a systems perspective. The past decade witnessed significant efforts on the development of computational infrastructure for large-scale modeling and analysis of...

  5. Algorithms for Scheduling and Network Problems

    Science.gov (United States)

    1991-09-01

    Thesis acknowledgment residue; the recoverable technical fragments read: "...at least one processor per operation, this can be done in NC using the edge-coloring algorithm of Lev, Pippenger, and Valiant [84]. We can extend this to..." together with partial references: "...scheduling unrelated parallel machines. Mathematical Programming, 46:259-271, 1990." and "[84] G. F. Lev, N. Pippenger, and L. G. Valiant. A fast parallel..."

  6. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Science.gov (United States)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov Chain Monte Carlo method.
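
    A minimal Metropolis sampler over the weights of a one-hidden-layer network, illustrating the MCMC Bayesian learning idea; the Gaussian prior and likelihood, network size, and proposal scale are simplifying assumptions rather than the paper's modified Jeffreys-prior setup.

```python
# Sketch: Metropolis sampling of the posterior over weights of a tiny network.
# Gaussian prior/likelihood and all sizes are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=40)

def predict(w, X):
    W1, b1, W2, b2 = w[:5].reshape(1, 5), w[5:10], w[10:15], w[15]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def log_post(w):
    resid = y - predict(w, X)
    return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * np.sum(w**2)  # likelihood + prior

w, samples = rng.normal(size=16) * 0.1, []
for step in range(5000):
    proposal = w + rng.normal(scale=0.05, size=16)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(w):
        w = proposal                                   # accept the proposed weights
    if step > 1000:                                    # discard burn-in
        samples.append(w.copy())

post_mean = np.mean([predict(s, X) for s in samples], axis=0)
print("train RMSE of posterior mean:", np.sqrt(np.mean((y - post_mean)**2)))
```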

  7. Prediction of Aerodynamic Coefficients for Wind Tunnel Data using a Genetic Algorithm Optimized Neural Network

    Science.gov (United States)

    Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy

    2002-01-01

    A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g. lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.

  8. Protein complexes predictions within protein interaction networks using genetic algorithms.

    Science.gov (United States)

    Ramadan, Emad; Naef, Ahmed; Ahmed, Moataz

    2016-07-25

    Protein-protein interaction networks are receiving increased attention due to their importance in understanding life at the cellular level. A major challenge in systems biology is to understand the modular structure of such biological networks. Although clustering techniques have been proposed for clustering protein-protein interaction networks, those techniques suffer from some drawbacks. The application of earlier clustering techniques to protein-protein interaction networks in order to predict protein complexes within the networks does not yield good results due to the small-world and power-law properties of these networks. In this paper, we construct a new clustering algorithm for predicting protein complexes through the use of genetic algorithms. We design an objective function for exclusive clustering and overlapping clustering. We assess the quality of our proposed clustering algorithm using two gold-standard data sets. Our algorithm can identify protein complexes that are significantly enriched in the gold-standard data sets. Furthermore, our method surpasses three competing methods: MCL, ClusterOne, and MCODE in terms of the quality of the predicted complexes. The source code and accompanying examples are freely available at http://faculty.kfupm.edu.sa/ics/eramadan/GACluster.zip .

  9. Datasets for radiation network algorithm development and testing

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [ORNL]; Sen, Satyabrata [ORNL]; Berry, M. L. [New Jersey Institute of Technology]; Wu, Qishi [University of Memphis]; Grieme, M. [New Jersey Institute of Technology]; Brooks, Richard R [ORNL]; Cordone, G. [Clemson University]

    2016-01-01

    The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various reconstructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next-generation radiation network algorithms, including those (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests, which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.

  10. Experienced Gray Wolf Optimization Through Reinforcement Learning and Neural Networks.

    Science.gov (United States)

    Emary, E; Zawbaa, Hossam M; Grosan, Crina

    2017-01-10

    In this paper, a variant of gray wolf optimization (GWO) that uses reinforcement learning principles combined with neural networks to enhance its performance is proposed. The aim is to overcome, through reinforcement learning, the common challenge of setting the right parameters for the algorithm. In GWO, a single parameter is used to control the exploration/exploitation rate, which influences the performance of the algorithm. Rather than using a global way to change this parameter for all the agents, we use reinforcement learning to set it on an individual basis. The adaptation of the exploration rate for each agent depends on the agent's own experience and the current terrain of the search space. In order to achieve this, an experience repository is built based on the neural network to map a set of agents' states to a set of corresponding actions that specifically influence the exploration rate. The experience repository is updated by all the search agents to reflect experience and to enhance future actions continuously. The resulting algorithm is called experienced GWO (EGWO), and its performance is assessed on feature selection problems and on finding optimal weights for neural networks. We use a set of performance indicators to evaluate the efficiency of the method. Results over various data sets demonstrate an advantage of the EGWO over the original GWO and over other metaheuristics, such as genetic algorithms and particle swarm optimization.

  11. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back propagation neural network (BPN) and has better global optimization characteristics than traditional optimization algorithms. In this paper, we apply GA-BPN to image noise filtering. Firstly, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm has better performance than other filters.

  12. On the use of harmony search algorithm in the training of wavelet neural networks

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.

  13. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.

  14. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    T. Panigrahi

    2012-01-01

    Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS) and block incremental least mean square (BILMS), by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and found to have similar performances to those offered by conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
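
    A sketch of the block LMS update that underlies BDLMS/BILMS, shown at a single node: the weight vector is updated once per block of samples instead of once per sample; the signal model and step size are assumptions.

```python
# Sketch: block LMS at a single node -- one weight update per block of samples.
# Signal model and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])               # unknown parameter to estimate
n, block, mu = 600, 20, 0.05

X = rng.normal(size=(n, 3))                       # regressors observed by the node
d = X @ w_true + rng.normal(scale=0.05, size=n)   # desired (noisy) signal

w = np.zeros(3)
for start in range(0, n, block):
    Xb, db = X[start:start + block], d[start:start + block]
    err = db - Xb @ w
    w += mu * Xb.T @ err / block                  # one update per block
print("estimated weights:", np.round(w, 3))
```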

  15. Human resource recommendation algorithm based on ensemble learning and Spark

    Science.gov (United States)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

    Aiming at the problem of “information overload” in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in a real business setting. Firstly, the algorithm uses two ensemble learning methods, Bagging and Boosting. The outputs from both learning methods are then merged to form a user interest model. Based on the user interest model, job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been carried out and analysed. The proposed algorithm achieves significant improvement in accuracy, recall rate, and coverage compared with recommendation algorithms such as UserCF and ItemCF.

  16. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.

  17. Modeling gene regulatory networks: A network simplification algorithm

    Science.gov (United States)

    Ferreira, Luiz Henrique O.; de Castro, Maria Clicia S.; da Silva, Fabricio A. B.

    2016-12-01

    Boolean networks have been used for some time to model Gene Regulatory Networks (GRNs), which describe cell functions. Such models can help biologists to make predictions and prognoses, and even devise specialized treatment, when some disturbance of the GRN leads to a diseased condition. However, the amount of information related to a GRN can be huge, making the task of inferring its Boolean network representation quite a challenge. The method shown here takes into account information about the interactome to build a network, where each node represents a protein, and uses the entropy of each node as a key to reduce the size of the network, allowing the subsequent inference process to focus only on the main protein hubs, the ones with the most potential to interfere with overall network behavior.

  18. Investigating the performance of neural network backpropagation algorithms for TEC estimations using South African GPS data

    Directory of Open Access Journals (Sweden)

    J. B. Habarulema

    2012-05-01

    Full Text Available In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimations using identical datasets in models development and verification processes. Investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC) and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS) and the main objective was to find out the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MatLab based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000–2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide comparable results statistically, but differ significantly in terms of time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide to neural network users for choosing appropriate algorithms based on the availability of computation capabilities used for research.

  20. Algorithmic Complexity and Reprogrammability of Chemical Structure Networks

    KAUST Repository

    Zenil, Hector

    2018-02-16

    Here we address the challenge of profiling causal properties and tracking the transformation of chemical compounds from an algorithmic perspective. We explore the potential of applying a computational interventional calculus based on the principles of algorithmic probability to chemical structure networks. We profile the sensitivity of the elements and covalent bonds in a chemical structure network algorithmically, asking whether reprogrammability affords information about thermodynamic and chemical processes involved in the transformation of different compound classes. We arrive at numerical results suggesting a correspondence between some physical, structural and functional properties. Our methods are capable of separating chemical classes that reflect functional and natural differences without considering any information about atomic and molecular properties. We conclude that these methods, with their links to chemoinformatics via algorithmic probability, hold promise for future research.

  1. Algorithm for queueing networks with multi-rate traffic

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; Ko, King-Tim

    2011-01-01

    In this paper we present a new algorithm for evaluating queueing networks with multi-rate traffic. The detailed state space of a node is evaluated by explicit formulæ. We consider reversible nodes with multi-rate traffic and find the state probabilities by taking advantage of local balance. Theory...... is reversibility which implies that the arrival process and departure process are identical processes, for example state-dependent Poisson processes. This property is equivalent to reversibility. Due to product form, an open network with multi-rate traffic is easy to evaluate by convolution algorithms because...... the nodes behave as independent nodes. For closed queueing networks with multiple servers in every node and multi-rate services we may apply multidimensional convolution algorithm to aggregate the nodes so that we end up with two nodes, the aggregated node and a single node, for which we can calculate...

  2. Secondary Structure Prediction of Protein using Resilient Back Propagation Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Jyotshna Dongardive

    2015-12-01

    Full Text Available The paper proposes a neural network based approach to predict the secondary structure of proteins. It uses a Multilayer Feed Forward Network (MLFN) with resilient back propagation as the learning algorithm. Point Accepted Mutation (PAM) is adopted as the encoding scheme, and the CB396 data set is used for the training and testing of the network. The overall accuracy of the network has been experimentally calculated with different window sizes for the sliding window scheme and by varying the number of units in the hidden layer. The best results were obtained with eleven as the window size and seven as the number of units in the hidden layer.

  3. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions of modeling principles, theories, computational solutions, and open problems, this reference presents a full scope of relevant biological phenomena, modeling frameworks, technical challenges, and algorithms.
    * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms
    * Sampling techniques for estimating evolutionary rates and generating molecular structures
    * Accurate computation of the probability landscape of stochastic networks, solving discrete chemical master equations
    * End-of-chapter exercises

  4. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    Science.gov (United States)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts
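
    A hedged sketch of the two-stage idea: cluster features of normal traffic to compress each packet's representation, then fit a one-class SVM (standing in for the SVM variant used in the paper) on the reduced representation; the synthetic data and parameters are assumptions.

```python
# Sketch of the two-stage pipeline: K-Means compresses packet features into
# distances-to-centroids, then a one-class SVM is trained on normal traffic only.
# Synthetic data; feature choice and parameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 6))     # cross-layer features, normal traffic
attack = rng.normal(4, 1, size=(20, 6))      # anomalous traffic

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(normal)
to_features = km.transform                   # stage 1: reduced representation

detector = OneClassSVM(nu=0.05, gamma="scale").fit(to_features(normal))
flags = detector.predict(to_features(attack))          # -1 marks an anomaly
print("flagged attacks:", int(np.sum(flags == -1)), "of", len(attack))
```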

  5. Regularized negative correlation learning for neural network ensembles.

    Science.gov (United States)

    Chen, Huanhuan; Yao, Xin

    2009-12-01

    Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term to the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when lambda = 1) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains the reason why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter lambda in NCL by cross validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, and each sub-objective is implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and provide an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. The experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level is nontrivial in the data set.
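
    For reference, a commonly cited form of the NCL objective for ensemble member i is sketched below; the notation is an assumption, and RNCL as described above augments this with an additional regularization term per member.

```latex
% Commonly cited NCL cost for ensemble member i (M members, ensemble mean \bar{F});
% RNCL adds a further regularization term to this objective.
e_i = \tfrac{1}{2}\bigl(F_i(x) - y\bigr)^2
      + \lambda \,\bigl(F_i(x) - \bar{F}(x)\bigr) \sum_{j \neq i} \bigl(F_j(x) - \bar{F}(x)\bigr),
\qquad
\bar{F}(x) = \frac{1}{M} \sum_{k=1}^{M} F_k(x)
```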

  6. Fast, Distributed Algorithms in Deep Networks

    Science.gov (United States)

    2016-05-11

    Report-body residue; the recoverable fragments describe a dataset of house-number images taken from the Google Street View car, in which each data point is a cropped image of a single digit, plus partial references to Dalal and Triggs, "Histograms of oriented gradients for human detection" (Computer Vision and Pattern Recognition, volume 1), and to "Understanding the difficulty of training deep feedforward neural networks" (International Conference on Artificial Intelligence and Statistics).

  7. Quantum-based algorithm for optimizing artificial neural networks.

    Science.gov (United States)

    Tzyy-Chyang Lu; Gwo-Ruey Yu; Jyh-Ching Juang

    2013-08-01

    This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region by region exploration, and evolves gradually to find promising subspaces for further exploitation. This is helpful to provide a set of appropriate weights when evolving the network structure and to alleviate the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely breast cancer and iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.

  8. Evaluation of Topology-Aware Broadcast Algorithms for Dragonfly Networks

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Mubarak, Misbah; Ross, Rob; Li, Jianping Kelvin; Carothers, Christopher D.; Ma, Kwan-Liu

    2016-09-12

    Two-tiered direct network topologies such as Dragonflies have been proposed for future post-petascale and exascale machines, since they provide a high-radix, low-diameter, fast interconnection network. Such topologies call for redesigning MPI collective communication algorithms in order to attain the best performance. Yet as increasingly more applications share a machine, it is not clear how these topology-aware algorithms will react to interference with concurrent jobs accessing the same network. In this paper, we study three topology-aware broadcast algorithms, including one designed by ourselves. We evaluate their performance through event-driven simulation for small- and large-sized broadcasts (in terms of both data size and number of processes). We study the effect of different routing mechanisms on the topology-aware collective algorithms, as well as their sensitivity to network contention with other jobs. Our results show that while topology-aware algorithms dramatically reduce link utilization, their advantage in terms of latency is more limited.

  9. Multiview Community Discovery Algorithm via Nonnegative Factorization Matrix in Heterogeneous Networks

    Directory of Open Access Journals (Sweden)

    Wang Tao

    2017-01-01

    Full Text Available With the rapid development of the Internet and communication technologies, a large number of multimode or multidimensional networks widely emerge in real-world applications. Traditional community detection methods usually focus on homogeneous networks and simply treat different modes of nodes and connections in the same way, thus ignoring the inherent complexity and diversity of heterogeneous networks. It is challenging to effectively integrate the multiple modes of network information to discover the hidden community structure underlying heterogeneous interactions. In our work, a joint nonnegative matrix factorization (Joint-NMF) algorithm is proposed to discover the complex structure in heterogeneous networks. Our method transforms the heterogeneous dataset into a series of correlated bipartite graphs. Taking inspiration from the multiview method, we extend semisupervised learning from a single graph to several bipartite graphs with multiple views. In this way, mutual information is provided between different bipartite graphs to realize the collaborative learning of different classifiers; this comprehensively considers the internal structure of all bipartite graphs and makes all the classifiers tend to reach a consensus on the clustering results of the target-mode nodes. The experimental results show that the Joint-NMF algorithm is efficient and well-behaved in real-world heterogeneous networks and can better explore the community structure of multimode nodes in heterogeneous networks.

  10. A source location algorithm of lightning detection networks in China

    Directory of Open Access Journals (Sweden)

    Z. X. Hu

    2010-10-01

    Full Text Available Fast and accurate retrieval of lightning sources is crucial to early warning and the quick repair of lightning damage. An algorithm for computing the location and onset time of cloud-to-ground lightning using time-of-arrival (TOA) and azimuth-of-arrival (AOA) data is introduced in this paper. The algorithm can iteratively calculate the least-squares solution of a lightning source on an oblate spheroidal Earth. It contains a set of unique formulas to compute the geodesic distance and azimuth and an explicit method to compute the initial position using TOA data from only three sensors. Since the method accounts for the effects of the oblateness of the Earth, it provides a more accurate solution than algorithms based on planar or spherical surface models. Numerical simulations are presented to test this algorithm and evaluate the performance of a lightning detection network in the Hubei province of China. Since the 1990s, the proposed algorithm has been used in many regional lightning detection networks installed by the electric power system in China. It is expected that the proposed algorithm will be adopted in more lightning detection networks and other location systems.
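
    As a rough illustration of the least-squares source retrieval described above, the following sketch performs a Gauss-Newton fit of position and onset time from time-of-arrival data on a flat plane with a constant propagation speed. It is a planar simplification, not the paper's oblate-spheroid geodesic formulation, and the sensor layout and timings are synthetic.

```python
import numpy as np

C = 3.0e5  # propagation speed in km/s (speed of light; illustrative)

def locate_toa(sensors, t_arrival, x0, n_iter=50):
    """Gauss-Newton least-squares fit of (x, y, t0) from time-of-arrival data.

    sensors: (N, 2) sensor positions in km; t_arrival: (N,) arrival times in s.
    A planar simplification of the spheroidal formulation described above.
    """
    x, y, t0 = x0
    for _ in range(n_iter):
        dx, dy = x - sensors[:, 0], y - sensors[:, 1]
        dist = np.hypot(dx, dy)
        residual = t_arrival - (t0 + dist / C)
        # Jacobian of the predicted arrival time with respect to (x, y, t0)
        J = np.column_stack([dx / (C * dist), dy / (C * dist), np.ones_like(dist)])
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x, y, t0 = x + step[0], y + step[1], t0 + step[2]
    return x, y, t0

# three sensors and a synthetic stroke at (30, 40) km with onset time 1 ms
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_xy, true_t0 = np.array([30.0, 40.0]), 1e-3
times = true_t0 + np.hypot(*(true_xy - sensors).T) / C
print(locate_toa(sensors, times, x0=(10.0, 10.0, 0.0)))
```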

  11. Algorithm for queueing networks with multi-rate traffic

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; King-Tim, Ko

    2011-01-01

    In this paper we present a new algorithm for evaluating queueing networks with multi-rate traffic. The detailed state space of a node is evaluated by explicit formulæ. We consider reversible nodes with multi-rate traffic and find the state probabilities by taking advantage of local balance. Theory...... is reversibility which implies that the arrival process and departure process are identical processes, for example state-dependent Poisson processes. This property is equivalent to reversibility. Due to product form, an open network with multi-rate traffic is easy to evaluate by convolution algorithms because...

  12. Machine learning of network metrics in ATLAS Distributed Data Management

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00218873; The ATLAS collaboration; Toler, Wesley; Vamosi, Ralf; Bogado Garcia, Joaquin Ignacio

    2017-01-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our m...

  13. An Optimal Routing Algorithm in Service Customized 5G Networks

    Directory of Open Access Journals (Sweden)

    Haipeng Yao

    2016-01-01

    Full Text Available With the widespread use of the Internet, the scale of mobile data traffic grows explosively, which makes 5G cellular networks a growing concern. Recently, ideas related to the future network, for example, Software Defined Networking (SDN), Content-Centric Networking (CCN), and Big Data, have drawn more and more attention. In this paper, we propose a service-customized 5G network architecture by introducing the ideas of separation between control plane and data plane, in-network caching, and Big Data processing and analysis to resolve the problems traditional cellular radio networks face. Moreover, we design an optimal routing algorithm for this architecture, which can minimize the average response hops in the network. Simulation results reveal that, by introducing the cache, the network performance can be obviously improved in different network conditions compared to the scenario without a cache. In addition, we explore the change of cache hit rate and average response hops under different cache replacement policies, cache sizes, content popularity, and network topologies, respectively.

  14. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  15. Adaptive Neural Network Nonparametric Identifier With Normalized Learning Laws.

    Science.gov (United States)

    Chairez, Isaac

    2017-05-01

    This paper addresses the design of a normalized convergent learning law for neural networks (NNs) with continuous dynamics. The NN is used here to obtain a nonparametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties is the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on normalized algorithms was used to adjust the weights of the NN. The adaptive algorithm was derived by means of a nonstandard logarithmic Lyapunov function (LLF). Two identifiers were designed using two variations of LLFs, leading to a normalized learning law for the first identifier and a variable-gain normalized learning law for the second. In the case of the second identifier, the inclusion of normalized learning laws yields a smaller convergence region, obtained as the solution of the practical stability analysis. On the other hand, the convergence speed of the learning laws depends inversely on the norm of the errors. This avoids peaking transient behavior in the time evolution of the weights and accelerates the convergence of the identification error. A numerical example demonstrates the improvements achieved by the algorithm introduced in this paper compared with classical schemes using non-normalized continuous learning methods. A comparison of the identification performance achieved by the non-normalized identifier and the ones developed in this paper shows the benefits of the learning law proposed in this paper.

  16. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    Directory of Open Access Journals (Sweden)

    Z. H. Che

    2014-01-01

    Full Text Available In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.

  17. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    Science.gov (United States)

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057

  18. Hybrid algorithms for fuzzy reverse supply chain network design.

    Science.gov (United States)

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods.

  19. Comparison and evaluation of network clustering algorithms applied to genetic interaction networks.

    Science.gov (United States)

    Hou, Lin; Wang, Lin; Berg, Arthur; Qian, Minping; Zhu, Yunping; Li, Fangting; Deng, Minghua

    2012-01-01

    The goal of network clustering algorithms is to detect dense clusters in a network and provide a first step towards the understanding of large-scale biological networks. With numerous recent advances in biotechnologies, large-scale genetic interactions are widely available, but there is a limited understanding of which clustering algorithms may be most effective. In order to address this problem, we conducted a systematic study to compare and evaluate six clustering algorithms in analyzing genetic interaction networks, and investigated the factors that influence the choice of algorithm. The algorithms considered in this comparison include hierarchical clustering, topological overlap matrix, bi-clustering, Markov clustering, Bayesian discriminant analysis based community detection, and variational Bayes approach to modularity. Both experimentally identified and synthetically constructed networks were used in this comparison. The accuracy of the algorithms is measured by the Jaccard index in comparing predicted gene modules with benchmark gene sets. The results suggest that the choice differs according to the network topology and evaluation criteria. Hierarchical clustering proved best at predicting protein complexes; Bayesian discriminant analysis based community detection proved best under epistatic miniarray profile (EMAP) datasets; the variational Bayes approach to modularity was noticeably better than the other algorithms in the genome-scale networks.
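
    The record above scores clustering results with the Jaccard index against benchmark gene sets. The sketch below shows one simple convention for doing that (matching each predicted module to its best-overlapping benchmark set and averaging); the matching rule and the toy gene identifiers are assumptions, not the study's exact protocol.

```python
def jaccard(a, b):
    """Jaccard index between two gene sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def score_clustering(predicted_modules, benchmark_sets):
    """Score each predicted module by its best-matching benchmark set and average."""
    scores = [max(jaccard(m, ref) for ref in benchmark_sets) for m in predicted_modules]
    return sum(scores) / len(scores)

# toy example with hypothetical gene identifiers
modules = [{"g1", "g2", "g3"}, {"g7", "g8"}]
benchmarks = [{"g1", "g2", "g3", "g4"}, {"g7", "g9"}]
print(score_clustering(modules, benchmarks))  # average of 0.75 and 1/3
```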

  20. GEOLOGICAL MAPPING USING MACHINE LEARNING ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. S. Harvey

    2016-06-01

    Full Text Available Remotely sensed spectral imagery, geophysical (magnetic and gravity), and geodetic (elevation) data are useful in a variety of Earth science applications such as environmental monitoring and mineral exploration. Using these data with Machine Learning Algorithms (MLAs), which are widely used in image analysis and statistical pattern recognition applications, may enhance preliminary geological mapping and interpretation. This approach contributes towards a rapid and objective means of geological mapping in contrast to conventional field expedition techniques. In this study, four supervised MLAs (naïve Bayes, k-nearest neighbour, random forest, and support vector machines) are compared in order to assess their performance for correctly identifying geological rock types in an area with complete ground validation information. Geological maps of the Sudbury region are used for calibration and validation. The percentage of correct classifications was used as the indicator of performance. Results show that random forest is the best approach. As expected, MLA performance improves with more calibration clusters, i.e. a more uniform distribution of calibration data over the study region. Performance is generally low, though geological trends that correspond to a ground validation map are visualized. Low performance may be the result of poor spectral images of bare rock which can be covered by vegetation or water. The distribution of calibration clusters and MLA input parameters affect the performance of the MLAs. Generally, performance improves with more uniform sampling, though this increases the required computational effort and time. With the achievable performance levels in this study, the technique is useful in identifying regions of interest and identifying general rock type trends. In particular, phase I geological site investigations will benefit from this approach and lead to the selection of sites for advanced surveys.
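
    For readers who want to reproduce a comparison like the one above, the sketch below trains the four named classifier families with scikit-learn and reports the percentage of correct classifications. The synthetic feature matrix stands in for the spectral, geophysical, and geodetic inputs; all dataset parameters and model settings are placeholders rather than the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# synthetic stand-in for per-pixel spectral/geophysical/geodetic features and rock-type labels
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "naive Bayes": GaussianNB(),
    "k-nearest neighbour": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "support vector machine": SVC(kernel="rbf", C=1.0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    # percentage of correct classifications, the performance indicator used above
    print(f"{name}: {100 * clf.score(X_test, y_test):.1f}% correct")
```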

  1. Identifying Gatekeepers in Online Learning Networks

    Science.gov (United States)

    Gursakal, Necmi; Bozkurt, Aras

    2017-01-01

    The rise of the networked society has not only changed our perceptions but also the definitions, roles, processes and dynamics of online learning networks. From offline to online worlds, networks are everywhere and gatekeepers are an important entity in these networks. In this context, the purpose of this paper is to explore gatekeeping and…

  2. Spectral algorithms for heterogeneous biological networks.

    Science.gov (United States)

    McDonald, Martin; Higham, Desmond J; Vass, J Keith

    2012-11-01

    Spectral methods, which use information relating to eigenvectors, singular vectors and generalized singular vectors, help us to visualize and summarize sets of pairwise interactions. In this work, we motivate and discuss the use of spectral methods by taking a matrix computation view and applying concepts from applied linear algebra. We show that this unified approach is sufficiently flexible to allow multiple sources of network information to be combined. We illustrate the methods on microarray data arising from a large population-based study in human adipose tissue, combined with related information concerning metabolic pathways.

  3. AN INTELLIGENT VERTICAL HANDOVER DECISION ALGORITHM FOR WIRELESS HETEROGENEOUS NETWORKS

    OpenAIRE

    V. Anantha Narayanan; Rajeswari, A; Sureshkumar, V.

    2014-01-01

    The Next Generation Wireless Networks (NGWN) should be compatible with other communication technologies to offer the best connectivity to the mobile terminal which can access any IP based services at any time from any network without the knowledge of its user. It requires an intelligent vertical handover decision making algorithm to migrate between technologies that enable seamless mobility, always best connection and minimal terminal power consumption. Currently existing decision engines are...

  4. Sensor and ad-hoc networks theoretical and algorithmic aspects

    CERN Document Server

    Makki, S Kami; Pissinou, Niki; Makki, Shamila; Karimi, Masoumeh

    2008-01-01

    This book brings together leading researchers and developers in the field of wireless sensor networks to explain the special problems and challenges of the algorithmic aspects of sensor and ad-hoc networks. The book also fosters communication not only between the different sensor and ad-hoc communities, but also between those communities and the distributed systems and information systems communities. The topics addressed pertain to the sensors and mobile environment.

  5. Access Network Selection Based on Fuzzy Logic and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Mohammed Alkhawlani

    2008-01-01

    Full Text Available In the next generation of heterogeneous wireless networks (HWNs), a large number of different radio access technologies (RATs) will be integrated into a common network. In this type of network, selecting the optimal and most promising access network (AN) is an important consideration for overall network stability, resource utilization, user satisfaction, and quality of service (QoS) provisioning. This paper proposes a general scheme to solve the access network selection (ANS) problem in the HWN. The proposed scheme has been used to present and design a general multicriteria software assistant (SA) that can consider the user, operator, and/or the QoS viewpoints. Combined fuzzy logic (FL) and genetic algorithms (GAs) have been used to give the proposed scheme the required scalability, flexibility, and simplicity. The simulation results show that the proposed scheme and SA have better and more robust performance than random-based selection.

  6. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    Science.gov (United States)

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  7. District Heating Network Design and Configuration Optimization with Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Hongwei Li

    2013-12-01

    Full Text Available In this paper, the configuration of a district heating network which connects the heating plant to the end users is optimized. Each end user in the network represents a building block. The connections between the heat generation plant and the end users are represented with mixed-integer variables, and the pipe friction and heat loss formulations are non-linear. In order to find the optimal district heating network configuration, a genetic algorithm, which can handle the mixed integer nonlinear programming problem, is chosen. The network configuration is represented with binary and integer encoding and is optimized in terms of the net present cost. The optimization results indicate that the optimal DH network configuration is determined by multiple factors such as the consumer heating load, the distance from the heating plant to the consumer, the design criteria regarding the pressure and temperature limitation, as well as the corresponding network heat loss.

  8. Fast grid layout algorithm for biological networks with sweep calculation.

    Science.gov (United States)

    Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

    2008-06-15

    Properly drawn biological networks are of great help in the comprehension of their characteristics. The quality of the layouts for retrieved biological networks is critical for pathway databases. However, since it is unrealistic to manually draw biological networks for every retrieval, automatic drawing algorithms are essential. Grid layout algorithms handle various biological properties such as aligning vertices having the same attributes and complicated positional constraints according to their subcellular localizations; thus, they succeed in providing biologically comprehensible layouts. However, existing grid layout algorithms are not suitable for real-time drawing, which is one of the requisites for applications to pathway databases, due to their high computational cost. In addition, they do not consider edge directions and their resulting layouts lack traceability for biochemical reactions and gene regulations, which are the most important features in biological networks. We devise a new calculation method termed sweep calculation and reduce the time complexity of the current grid layout algorithms through its encoding and decoding processes. We conduct practical experiments by using 95 pathway models of various sizes from TRANSPATH and show that our new grid layout algorithm is much faster than existing grid layout algorithms. For the cost function, we introduce a new component that penalizes undesirable edge directions to avoid the lack of traceability in pathways due to the differences in direction between in-edges and out-edges of each vertex. Java implementations of our layout algorithms are available in Cell Illustrator. masao@ims.u-tokyo.ac.jp Supplementary data are available at Bioinformatics online.

  9. Neural network design with combined backpropagation and creeping random search learning algorithms applied to the determination of retained austenite in TRIP steels; Diseno de redes neuronales con aprendizaje combinado de retropropagacion y busqueda aleatoria progresiva aplicado a la determinacion de austenita retenida en aceros TRIP

    Energy Technology Data Exchange (ETDEWEB)

    Toda-Caraballo, I.; Garcia-Mateo, C.; Capdevila, C.

    2010-07-01

    At the beginning of the 1990s, industrial interest in TRIP steels led to a significant increase in research and application in this field. In this work, the flexibility of neural networks for the modelling of complex properties is used to tackle the problem of determining the retained austenite content in TRIP steel. Applying a combination of two learning algorithms (backpropagation and creeping random search) for the neural network, a model has been created that enables the prediction of retained austenite in low-Si / low-Al multiphase steels as a function of processing parameters. (Author). 34 refs.

  10. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  11. A Differentiated Anonymity Algorithm for Social Network Privacy Preservation

    Directory of Open Access Journals (Sweden)

    Yuqin Xie

    2016-12-01

    Full Text Available Devising methods to publish social network data in a form that affords utility without compromising privacy remains a longstanding challenge, while many existing methods based on k-anonymity algorithms on social networks may result in nontrivial utility loss without analyzing the social network topological structure and without considering the attributes of sparse distribution. Toward this objective, we explore the impact of the attributes of sparse distribution on data utility. Firstly, we propose a new utility metric that emphasizes network structure distortion and attribute value loss. Furthermore, we design and implement a differentiated k-anonymity l-diversity social network anonymity algorithm, which seeks to protect users’ privacy in social networks and increase the usability of the published anonymized data. Its key idea is that it divides a node into two child nodes and only anonymizes sensitive values to satisfy anonymity requirements. The evaluation results show that our method can effectively improve the data utility as compared to generalized anonymizing algorithms.

  12. Distributed Constrained Stochastic Subgradient Algorithms Based on Random Projection and Asynchronous Broadcast over Networks

    Directory of Open Access Journals (Sweden)

    Junlong Zhu

    2017-01-01

    Full Text Available We consider a distributed constrained optimization problem over a time-varying network, where each agent only knows its own cost functions and its constraint set. However, the local constraint set may not be known in advance or may consist of a huge number of components in some applications. To deal with such cases, we propose a distributed stochastic subgradient algorithm over time-varying networks, where each agent's estimate is projected onto its constraint set using a random projection technique, and information exchange between agents is implemented with an asynchronous broadcast communication protocol. We show that our proposed algorithm converges with probability 1 when a suitable learning rate is chosen. For a constant learning rate, we obtain an error bound, which is defined as the expected distance between the estimates of the agents and the optimal solution. We also establish an asymptotic upper bound on the gap between the global objective function value at the average of the estimates and the optimal value.
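
    The core step described above is a subgradient move followed by projection onto one randomly chosen component of the constraint set. The single-agent sketch below illustrates that step for a toy l1 objective over an intersection of balls; the consensus and asynchronous broadcast parts over a time-varying network are omitted, and the sets, objective, and step-size rule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def project_ball(x, center, radius):
    """Euclidean projection of x onto the ball ||x - center|| <= radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# constraint set = intersection of several balls; objective f(x) = ||x - target||_1
balls = [(np.array([0.0, 0.0]), 2.0), (np.array([1.0, 0.0]), 2.0), (np.array([0.0, 1.0]), 2.0)]
target = np.array([5.0, 5.0])

x = np.zeros(2)
for k in range(1, 2001):
    g = np.sign(x - target)                            # a subgradient of the l1 objective
    alpha = 1.0 / np.sqrt(k)                           # diminishing learning rate
    x = x - alpha * g
    center, radius = balls[rng.integers(len(balls))]   # random projection: one set per step
    x = project_ball(x, center, radius)

print("approximate constrained minimizer:", np.round(x, 3))
```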

  13. A new mutually reinforcing network node and link ranking algorithm.

    Science.gov (United States)

    Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E

    2015-10-23

    This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity.
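
    The record above combines HITS-style mutual reinforcement of node and link scores with PageRank-style normalization and degree/betweenness-based link weights. The sketch below is a generic mutual-reinforcement iteration in that spirit; it uses only endpoint degrees for link weights and is a simplification, not the published NWRank.

```python
import numpy as np

def rank_nodes_and_links(edges, n_nodes, n_iter=100):
    """Mutually reinforcing ranking: a node is important if its incident links are,
    and a link is important if its endpoints are (HITS-like), with scores
    renormalized each iteration (PageRank-like normalization)."""
    deg = np.zeros(n_nodes)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # link weights from endpoint degrees (stand-in for the degree/betweenness weighting)
    w = np.array([deg[u] + deg[v] for u, v in edges], dtype=float)

    node, link = np.ones(n_nodes), np.ones(len(edges))
    for _ in range(n_iter):
        new_node = np.zeros(n_nodes)
        for i, (u, v) in enumerate(edges):
            new_node[u] += w[i] * link[i]
            new_node[v] += w[i] * link[i]
        new_link = np.array([w[i] * (new_node[u] + new_node[v])
                             for i, (u, v) in enumerate(edges)])
        node = new_node / new_node.sum()
        link = new_link / new_link.sum()
    return node, link

edges = [(0, 1), (1, 2), (2, 3), (1, 3), (3, 4)]
nodes, links = rank_nodes_and_links(edges, n_nodes=5)
print(np.argsort(-nodes), np.argsort(-links))  # nodes and links from most to least important
```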

  14. A new mutually reinforcing network node and link ranking algorithm

    Science.gov (United States)

    Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E.

    2015-10-01

    This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity.

  15. Mitigate Cascading Failures on Networks using a Memetic Algorithm.

    Science.gov (United States)

    Tang, Xianglong; Liu, Jing; Hao, Xingxing

    2016-12-09

    Research concerning cascading failures in complex networks has become a hot topic. However, most of the existing studies have focused on modelling the cascading phenomenon on networks and analysing network robustness from a theoretical point of view, which considers only the damage incurred by the failure of one or several nodes. However, such a theoretical approach may not be useful in practical situations. Thus, we first design a much more practical measure to evaluate the robustness of networks against cascading failures, termed Rcf. Then, adopting Rcf as the objective function, we propose a new memetic algorithm (MA) named MA-Rcf to enhance the network robustness against cascading failures. Moreover, we design a new local search operator that considers the characteristics of cascading failures and operates by connecting nodes with a high probability of having similar loads. In experiments, both synthetic scale-free networks and real-world networks are used to test the efficiency and effectiveness of the MA-Rcf. We systematically investigate the effects of parameters on the performance of the MA-Rcf and validate the performance of the newly designed local search operator. The results show that the local search operator is effective, that MA-Rcf can enhance network robustness against cascading failures efficiently, and that it outperforms existing algorithms.

  16. Active Learning for Node Classification in Assortative and Disassortative Networks

    CERN Document Server

    Moore, Cristopher; Zhu, Yaojia; Rouquier, Jean-Baptiste; Lane, Terran

    2011-01-01

    In many real-world networks, nodes have class labels, attributes, or variables that affect the network's topology. If the topology of the network is known but the labels of the nodes are hidden, we would like to select a small subset of nodes such that, if we knew their labels, we could accurately predict the labels of all the other nodes. We develop an active learning algorithm for this problem which uses information-theoretic techniques to choose which nodes to explore. We test our algorithm on networks from three different domains: a social network, a network of English words that appear adjacently in a novel, and a marine food web. Our algorithm makes no initial assumptions about how the groups connect, and performs well even when faced with quite general types of network structure. In particular, we do not assume that nodes of the same class are more likely to be connected to each other---only that they connect to the rest of the network in similar ways.

  17. A Learning Dashboard to Monitor an Open Networked Learning Community

    Science.gov (United States)

    Grippa, Francesca; Secundo, Giustina; de Maggio, Marco

    This chapter proposes an operational model to monitor and assess an Open Networked Learning Community. Specifically, the model is based on the Intellectual Capital framework, along the Human, Structural and Social dimensions. It relies on social network analysis to map several complementary perspectives of a learning network. Its application makes it possible to observe and monitor the cognitive behaviour of a learning community, with the ultimate aim of tracking and obtaining valuable insights for value generation.

  18. Two neural network algorithms for designing optimal terminal controllers with open final time

    Science.gov (United States)

    Plumer, Edward S.

    1992-01-01

    Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.

  19. Ripple-Spreading Network Model Optimization by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao-Bing Hu

    2013-01-01

    Full Text Available Small-world and scale-free properties are widely acknowledged in many real-world complex network systems, and many network models have been developed to capture these network properties. The ripple-spreading network model (RSNM) is a newly reported complex network model, which is inspired by the natural ripple-spreading phenomenon on a calm water surface. The RSNM exhibits good potential for describing both spatial and temporal features in the development of many real-world networks where the influence of a few local events spreads out through nodes and then largely determines the final network topology. However, the relationships between ripple-spreading related parameters (RSRPs) of the RSNM and small-world and scale-free topologies are not as obvious or straightforward as in many other network models. This paper attempts to apply a genetic algorithm (GA) to tune the values of RSRPs, so that the RSNM may generate these two most important network topologies. The study demonstrates that, once RSRPs are properly tuned by the GA, the RSNM is capable of generating both network topologies and therefore has great flexibility to study many real-world complex network systems.

  20. Interlog protein network: an evolutionary benchmark of protein interaction networks for the evaluation of clustering algorithms.

    Science.gov (United States)

    Jafari, Mohieddin; Mirzaie, Mehdi; Sadeghi, Mehdi

    2015-10-05

    In the field of network science, exploring principal and crucial modules or communities is critical in the deduction of relationships and organization of complex networks. This approach expands an arena, and thus allows further study of biological functions in the field of network biology. As the clustering algorithms that are currently employed in finding modules have innate uncertainties, external and internal validations are necessary. Sequence and network structure alignment has been used to define the Interlog Protein Network (IPN). This network is an evolutionarily conserved network with communal nodes and fewer false-positive links. In the current study, the IPN is employed as an evolution-based benchmark for the validation of module finding methods. The clustering results of five algorithms, namely Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Cartographic Representation (CR), Laplacian Dynamics (LD), and a Genetic Algorithm to find communities in Protein-Protein Interaction networks (GAPPI), are assessed against the IPN in four distinct Protein-Protein Interaction Networks (PPINs). MCL proves to be a more accurate algorithm under this evolutionary benchmarking approach. Also, the biological relevance of proteins in the IPN modules generated by MCL is compatible with biological standard databases such as Gene Ontology, KEGG and Reactome. In this study, the IPN shows its potential for validation of clustering algorithms due to its biological logic and straightforward implementation.

  1. Teaching learning based optimization algorithm and its engineering applications

    CERN Document Server

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
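
    The teaching-learning principle summarized above has a compact algorithmic form: a teacher phase pulls the population toward the best solution via the population mean, and a learner phase lets randomly paired learners improve each other. The sketch below is a minimal TLBO loop on a toy sphere function; the population size, bounds, and test function are illustrative assumptions, not material from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

def tlbo_minimize(f, bounds, pop_size=20, n_gen=100):
    """Minimal Teaching-Learning-Based Optimization sketch for minimizing f."""
    low, high = bounds
    dim = len(low)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    cost = np.apply_along_axis(f, 1, pop)

    for _ in range(n_gen):
        # teacher phase: move everyone toward the best learner, away from the mean
        teacher, mean = pop[np.argmin(cost)], pop.mean(axis=0)
        tf = rng.integers(1, 3)                       # teaching factor in {1, 2}
        cand = np.clip(pop + rng.random((pop_size, dim)) * (teacher - tf * mean), low, high)
        cand_cost = np.apply_along_axis(f, 1, cand)
        better = cand_cost < cost
        pop[better], cost[better] = cand[better], cand_cost[better]

        # learner phase: each learner interacts with a random partner
        partners = rng.permutation(pop_size)
        direction = np.where((cost < cost[partners])[:, None],
                             pop - pop[partners], pop[partners] - pop)
        cand = np.clip(pop + rng.random((pop_size, dim)) * direction, low, high)
        cand_cost = np.apply_along_axis(f, 1, cand)
        better = cand_cost < cost
        pop[better], cost[better] = cand[better], cand_cost[better]

    best = np.argmin(cost)
    return pop[best], cost[best]

sphere = lambda x: float(np.sum(x ** 2))
print(tlbo_minimize(sphere, bounds=(np.full(5, -10.0), np.full(5, 10.0))))
```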

  2. An Improved Brain-Inspired Emotional Learning Algorithm for Fast Classification

    Directory of Open Access Journals (Sweden)

    Ying Mei

    2017-06-01

    Full Text Available Classification is an important task of machine intelligence in the field of information. The artificial neural network (ANN) is widely used for classification. However, the traditional ANN shows slow training speed, and it is hard to meet the real-time requirement for large-scale applications. In this paper, an improved brain-inspired emotional learning (BEL) algorithm is proposed for fast classification. The BEL algorithm was put forward to mimic the high speed of the emotional learning mechanism in the mammalian brain, which has the superior features of fast learning and low computational complexity. To improve the accuracy of BEL in classification, the genetic algorithm (GA) is adopted for optimally tuning the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The combined algorithm, named GA-BEL, has been tested on eight University of California at Irvine (UCI) datasets and two well-known databases (Japanese Female Facial Expression, Cohn–Kanade). The comparisons of experiments indicate that the proposed GA-BEL is more accurate than the original BEL algorithm, and it is much faster than the traditional algorithm.

  3. Supervised learning algorithms for visual object categorization

    NARCIS (Netherlands)

    bin Abdullah, A.|info:eu-repo/dai/nl/304842052

    2010-01-01

    This thesis presents novel techniques for image recognition systems for better understanding image content. More specifically, it looks at the algorithmic aspects and experimental verification to demonstrate the capability of the proposed algorithms. These techniques aim to improve the three major

  4. DS+: Reliable Distributed Snapshot Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Gamze Uslu

    2013-01-01

    Full Text Available Acquiring a snapshot of a distributed system helps in gathering system-related global state. In wireless sensor networks (WSNs), the global state shows whether a node has terminated or a deadlock has occurred, along with many other situations that prevent a WSN from fully functioning. In this paper, we present a fully distributed snapshot acquisition algorithm adapted to tree-topology wireless sensor networks (WSNs). Since snapshot acquisition relies on control messages sent over highly lossy wireless channels and congested nodes, we enhanced the snapshot algorithm with a sink-based reliability suite to achieve robustness. We analyzed the performance of the algorithm in terms of snapshot success ratio and response time in simulation and in a small experimental test bed environment. The results reveal that the proposed tailor-made reliability model improves snapshot acquisition performance by a factor of seven and response time by a factor of two in a 30-node network. We have also shown that the proposed algorithm outperforms its counterparts in the specified network setting.

  5. The power-series algorithm for Markovian queueing networks

    NARCIS (Netherlands)

    van den Hout, W.B.; Blanc, J.P.C.

    1994-01-01

    A new version of the Power-Series Algorithm is developed to compute the steady-state distribution of a rich class of Markovian queueing networks. The arrival process is a Multi-queue Markovian Arrival Process, which is a multi-queue generalization of the BMAP. It includes Poisson, fork and

  6. Multimedia over cognitive radio networks algorithms, protocols, and experiments

    CERN Document Server

    Hu, Fei

    2014-01-01

    Preface. About the Editors. Contributors. Network Architecture to Support Multimedia over CRN: A Management Architecture for Multimedia Communication in Cognitive Radio Networks (Alexandru O. Popescu, Yong Yao, Markus Fiedler, and Adrian P. Popescu); Paving a Wider Way for Multimedia over Cognitive Radios: An Overview of Wideband Spectrum Sensing Algorithms (Bashar I. Ahmad, Hongjian Sun, Cong Ling, and Arumugam Nallanathan); Bargaining-Based Spectrum Sharing for Broadband Multimedia Services in Cognitive Radio Network (Yang Yan, Xiang Chen, Xiaofeng Zhong, Ming Zhao, and Jing Wang); Physical Layer Mobility Challen

  7. Sequential Classification of Palm Gestures Based on A* Algorithm and MLP Neural Network for Quadrocopter Control

    Directory of Open Access Journals (Sweden)

    Wodziński Marek

    2017-06-01

    Full Text Available This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, and a multivariate Gaussian anomaly detector) and on finding the shortest path in a directed acyclic graph, using the A* algorithm with a regression-based heuristic. Palm gestures were used as an example of the sequential data and a quadrocopter was the controlled object. The study includes the creation of a conceptual model and the practical construction of a system using the GPU to ensure real-time operation. The results present the classification accuracy of the chosen gestures and a comparison of the computation time between the CPU- and GPU-based solutions.
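
    The classification step above reduces to a shortest-path search with A* and a learned heuristic. The sketch below is a generic A* over a small weighted DAG with a pluggable heuristic supplied as a lookup table; in the paper's setting a regression model would provide that heuristic, and the graph, costs, and heuristic values here are invented for illustration.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """Generic A*: graph maps node -> list of (neighbor, edge_cost);
    heuristic(n) must never overestimate the remaining cost to goal."""
    open_heap = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nxt, cost in graph.get(node, []):
            g_next = g + cost
            if g_next < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_next
                heapq.heappush(open_heap,
                               (g_next + heuristic(nxt), g_next, nxt, path + [nxt]))
    return None, float("inf")

# toy DAG; a regression model could supply heuristic(n), here a fixed table is used
dag = {"s": [("a", 2.0), ("b", 5.0)], "a": [("b", 1.0), ("t", 6.0)], "b": [("t", 2.0)]}
h = {"s": 4.0, "a": 3.0, "b": 2.0, "t": 0.0}
print(a_star(dag, "s", "t", heuristic=lambda n: h[n]))  # (['s', 'a', 'b', 't'], 5.0)
```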

  8. Efficient generation of image chips for training deep learning algorithms

    Science.gov (United States)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

    Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center with different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with

  9. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.

  10. Online Semi-Supervised Learning: Algorithm and Application in Metagenomics

    NARCIS (Netherlands)

    Imangaliyev, S.; Keijser, B.J.F.; Crielaard, W.; Tsivtsivadze, E.

    2013-01-01

    As the amount of metagenomic data grows rapidly, online statistical learning algorithms are poised to play a key role in metagenome analysis tasks. Frequently, data are only partially labeled, that is, the dataset contains only partial information about the problem of interest. This work presents an algorithm and

  11. Online semi-supervised learning: algorithm and application in metagenomics

    NARCIS (Netherlands)

    Imangaliyev, S.; Keijser, B.J.; Crielaard, W.; Tsivtsivadze, E.; Li, G.Z.; Kim, S.; Hughes, M.; McLachlan, G.; Sun, H.; Hu, X.; Ressom, H.; Liu, B.; Liebman, M.

    2013-01-01

    As the amount of metagenomic data grows rapidly, online statistical learning algorithms are poised to play a key role in metagenome analysis tasks. Frequently, data are only partially labeled, that is, the dataset contains only partial information about the problem of interest. This work presents an algorithm

  12. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Zheng Bin; Meng Qingfeng; Wang Nan [Theory of Lubrication and Bearing Institute, Xi'an Jiaotong University, Xi'an, 710049 (China)]; Li Zhi, E-mail: rthree.zhengbin@stu.xjtu.edu.cn [Dalian Machine Tool Group Corp. Dalian, 116620 (China)]

    2011-07-19

    Energy consumption is always an important problem in the application of wireless sensor networks (WSNs). This paper proposes a data compression algorithm to reduce the amount of data and the energy consumption during the data transmission process in an online WSN-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding and Huffman coding. Among them, the 5/3 lifting wavelet is used to divide the data into different frequency bands to extract signal characteristics. Zerotree coding is applied to calculate dynamic thresholds to retain the attribute data. The attribute data are then encoded by Huffman coding to further enhance the compression ratio. In order to validate the algorithm, a simulation is carried out using Matlab. The simulation result shows that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSN-based bearing monitoring system, in which a TI DSP TMS320F2812 is used to implement the algorithm.
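
    The first stage of the pipeline above is a 5/3 lifting wavelet decomposition followed by thresholding of the detail coefficients. The sketch below shows one lifting level (predict and update steps) and a naive threshold as a crude stand-in for the zerotree stage; the Huffman stage is omitted and periodic boundary extension is assumed here for brevity, which may differ from the paper's implementation.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the 5/3 (LeGall) lifting wavelet on an even-length signal."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # predict step: detail = odd sample minus the average of its even neighbours
    d = odd - 0.5 * (even + np.roll(even, -1))
    # update step: approximation = even sample plus a quarter of the neighbouring details
    s = even + 0.25 * (d + np.roll(d, 1))
    return s, d

def threshold(coeffs, ratio=0.2):
    """Keep only coefficients above a dynamic threshold (crude stand-in for zerotree coding)."""
    t = ratio * np.max(np.abs(coeffs))
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

# synthetic vibration-like signal standing in for bearing monitoring data
signal = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.05 * np.random.default_rng(2).normal(size=64)
approx, detail = lifting_53_forward(signal)
print("nonzero detail coefficients after thresholding:",
      int(np.count_nonzero(threshold(detail))))
```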

  13. Detection of Urban Damage Using Remote Sensing and Machine Learning Algorithms: Revisiting the 2010 Haiti Earthquake

    Directory of Open Access Journals (Sweden)

    Austin J. Cooner

    2016-10-01

    Full Text Available Remote sensing continues to be an invaluable tool in earthquake damage assessments and emergency response. This study evaluates the effectiveness of multilayer feedforward neural networks, radial basis neural networks, and Random Forests in detecting earthquake damage caused by the 2010 Port-au-Prince, Haiti 7.0 moment magnitude (Mw) event. Additionally, textural and structural features including entropy, dissimilarity, Laplacian of Gaussian, and rectangular fit are investigated as key variables for high spatial resolution imagery classification. Our findings show that each of the algorithms achieved nearly a 90% kernel density match using the United Nations Operational Satellite Applications Programme (UNITAR/UNOSAT) dataset as validation. The multilayer feedforward network was able to achieve an error rate below 40% in detecting damaged buildings. Spatial features of texture and structure were far more important in algorithmic classification than spectral information, highlighting the potential for future implementation of machine learning algorithms which use panchromatic or pansharpened imagery alone.

  14. A Self-Driven and Adaptive Adjusting Teaching Learning Method for Optimizing Optical Multicast Network Throughput

    Science.gov (United States)

    Liu, Huanlin; Xu, Yifan; Chen, Yong; Zhang, Mingjia

    2016-09-01

    With the development of point-to-multipoint applications, network resources become scarcer and wavelength channels become more crowded in optical networks. Multicast routing based on network coding can greatly increase resource utilization, but it is difficult to maximize the network throughput when the differences between the multicast receiving nodes are ignored. To make full use of the destination nodes' receiving ability and maximize the network throughput of optical multicast, a new optical multicast routing algorithm based on teaching-learning-based optimization (MR-iTLBO) is proposed in this paper. In order to increase the diversity of learning, a self-driven learning method is adopted in the MR-iTLBO algorithm, and the mutation operator of the genetic algorithm is introduced to prevent the algorithm from falling into a local optimum. To increase the learners' learning efficiency, an adaptive learning factor is designed to adjust the learning process. Moreover, a reconfiguration scheme based on a probability vector is devised to expand the global search capability of the MR-iTLBO algorithm. The simulation results show that the performance in terms of network throughput and convergence rate is improved significantly with respect to TLBO and its variant.

  15. IJA: An Efficient Algorithm for Query Processing in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dong Hwa Kim

    2011-01-01

    Full Text Available One of the main features of sensor networks is the ability to process real-time state information after gathering the needed data from many domains. The component technologies that make up each node, called a sensor node, including physical sensors, processors, actuators and power supplies, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across a wide range of industries for sensing physical phenomena. However, sensor nodes in sensor networks are considerably constrained: with their limited energy and memory resources, they have a very limited ability to process information compared to conventional computer systems. Thus query processing over the nodes is constrained by these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes and have been studied accordingly. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, or to minimize the communication cost, which is the main consumer of the battery, when processing distributed queries in sensor network environments. The simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm.

  16. IJA: An Efficient Algorithm for Query Processing in Sensor Networks

    Science.gov (United States)

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    One of the main features of sensor networks is the ability to process real-time state information after gathering the needed data from many domains. The component technologies that make up each node, called a sensor node, including physical sensors, processors, actuators and power supplies, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across a wide range of industries for sensing physical phenomena. However, sensor nodes in sensor networks are considerably constrained: with their limited energy and memory resources, they have a very limited ability to process information compared to conventional computer systems. Thus query processing over the nodes is constrained by these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes and have been studied accordingly. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, or to minimize the communication cost, which is the main consumer of the battery, when processing distributed queries in sensor network environments. The simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375

  17. Genetic algorithm application in optimization of wireless sensor networks.

    Science.gov (United States)

    Norouzi, Ali; Zaim, A Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and this variety demands improvement of the currently available protocols and their specific parameters. Notable parameters are network lifetime and the energy consumed by routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a comparatively good option thanks to its efficiency for large-scale applications and the fact that its formulation can be modified through its operators. The present survey aims at a comprehensive improvement across all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, in order to achieve an ideal set of routing and application parameters. Using the genetic algorithm and the results of simulations in NS, a specific fitness function was derived, optimized, and customized for all the operational stages of WSNs.
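
    As a rough, self-contained illustration of how a genetic algorithm can drive one of these stages (node placement for area coverage), the Python sketch below evolves sensor positions toward higher grid coverage. The area size, sensing radius, and GA parameters are arbitrary assumptions, not values from the survey.

import random

AREA, R, N_SENSORS, POP, GENS = 30, 8, 12, 20, 40          # illustrative parameters
GRID = [(x, y) for x in range(AREA) for y in range(AREA)]  # points that should be sensed

def coverage(layout):
    # Fitness: fraction of grid points within sensing radius R of at least one sensor.
    covered = sum(1 for p in GRID
                  if any((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2 <= R * R for s in layout))
    return covered / len(GRID)

def random_layout():
    return [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N_SENSORS)]

def crossover(a, b):
    cut = random.randint(1, N_SENSORS - 1)
    return a[:cut] + b[cut:]

def mutate(layout, rate=0.1):
    return [(random.uniform(0, AREA), random.uniform(0, AREA)) if random.random() < rate else s
            for s in layout]

pop = [random_layout() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=coverage, reverse=True)          # keep the fitter half as parents
    parents = pop[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children
print("best coverage:", round(coverage(max(pop, key=coverage)), 3))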

  18. A blind matching algorithm for cognitive radio networks

    KAUST Repository

    Hamza, Doha R.

    2016-08-15

    We consider a cognitive radio network where secondary users (SUs) are allowed access time to the spectrum belonging to the primary users (PUs) provided that they relay primary messages. PUs and SUs negotiate over allocations of the secondary power that will be used to relay PU data. We formulate the problem as a generalized assignment market to find an epsilon pairwise stable matching. We propose a distributed blind matching algorithm (BLMA) to produce the pairwise-stable matching plus the associated power allocations. We stipulate a limited information exchange in the network so that agents only calculate their own utilities but no information is available about the utilities of any other users in the network. We establish convergence to epsilon pairwise stable matchings in finite time. Finally we show that our algorithm exhibits a limited degradation in PU utility when compared with the Pareto optimal results attained using perfect information assumptions. © 2016 IEEE.

  19. GENETIC ALGORITHM BASED CONCEPT DESIGN TO OPTIMIZE NETWORK LOAD BALANCE

    Directory of Open Access Journals (Sweden)

    Ashish Jain

    2012-07-01

    Full Text Available Multi-constraint optimal network load balancing is an NP-hard problem and an important part of traffic engineering. In this research we first balance the network load using classical methods (a brute-force approach and dynamic programming), but the results show the limitations of these methods: optimizing the balanced network load becomes intractable as the number of nodes and demands increases, because the solution set grows exponentially. In such cases, optimization techniques such as evolutionary techniques can be employed to optimize network load balancing. In this paper we analyze the proposed classical algorithm, and an evolutionary genetic approach is devised and proposed for optimizing the balanced network load.

  20. Resistive Network Optimal Power Flow: Uniqueness and Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tan, CW; Cai, DWH; Lou, X

    2015-01-01

    The optimal power flow (OPF) problem minimizes the power loss in an electrical network by optimizing the voltage and power delivered at the network buses, and is a nonconvex problem that is generally hard to solve. By leveraging a recent development on the zero duality gap of OPF, we propose a second-order cone programming convex relaxation of the resistive network OPF, and study the uniqueness of the optimal solution using differential topology, especially the Poincare-Hopf Index Theorem. We characterize the global uniqueness for different network topologies, e.g., line, radial, and mesh networks. This serves as a starting point to design distributed local algorithms with global behaviors that have low complexity, are computationally fast, and can run under synchronous and asynchronous settings in practical power grids.

  1. Intelligent Control of Urban Road Networks: Algorithms, Systems and Communications

    Science.gov (United States)

    Smith, Mike

    This paper considers control in road networks. Using a simple example based on the well-known Braess network [1] the paper shows that reducing delay for traffic, assuming that the traffic distribution is fixed, may increase delay when travellers change their travel choices in light of changes in control settings and hence delays. It is shown that a similar effect occurs within signal controlled networks. In this case the effect appears at first sight to be much stronger: the overall capacity of a network may be substantially reduced by utilising standard responsive signal control algorithms. In seeking to reduce delays for existing flows, these policies do not allow properly for consequent routeing changes by travellers. Control methods for signal-controlled networks that do take proper account of the reactions of users are suggested; these require further research, development, and careful real-life trials.

  2. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    Science.gov (United States)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  3. A Message-Passing Algorithm for Wireless Network Scheduling.

    Science.gov (United States)

    Paschalidis, Ioannis Ch; Huang, Fuzhuo; Lai, Wei

    2015-10-01

    We consider scheduling in wireless networks and formulate it as Maximum Weighted Independent Set (MWIS) problem on a "conflict" graph that captures interference among simultaneous transmissions. We propose a novel, low-complexity, and fully distributed algorithm that yields high-quality feasible solutions. Our proposed algorithm consists of two phases, each of which requires only local information and is based on message-passing. The first phase solves a relaxation of the MWIS problem using a gradient projection method. The relaxation we consider is tighter than the simple linear programming relaxation and incorporates constraints on all cliques in the graph. The second phase of the algorithm starts from the solution of the relaxation and constructs a feasible solution to the MWIS problem. We show that our algorithm always outputs an optimal solution to the MWIS problem for perfect graphs. Simulation results compare our policies against Carrier Sense Multiple Access (CSMA) and other alternatives and show excellent performance.
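
    The paper's algorithm is a two-phase message-passing scheme; the Python sketch below only illustrates the scheduling objective it targets, by greedily picking a heavy independent set of non-interfering links. The link weights, conflict graph, and degree-normalised ordering are illustrative assumptions, not the authors' method.

def greedy_mwis(weights, conflicts):
    # Greedy baseline for Maximum Weighted Independent Set on a conflict graph.
    # weights:   {link: weight}, e.g. queue backlogs
    # conflicts: {link: set of links it interferes with}
    chosen, blocked = set(), set()
    order = sorted(weights, key=lambda v: weights[v] / (len(conflicts.get(v, ())) + 1),
                   reverse=True)                  # favour heavy, lightly-conflicting links
    for v in order:
        if v not in blocked:
            chosen.add(v)
            blocked |= conflicts.get(v, set())
            blocked.add(v)
    return chosen

# Illustrative 5-link conflict graph (a path a-b-c-d-e).
w = {"a": 3.0, "b": 2.0, "c": 2.5, "d": 1.0, "e": 1.5}
g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "e"}, "e": {"d"}}
print(greedy_mwis(w, g))   # {'a', 'c', 'e'}: no two chosen links interfere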

  4. Resource-Aware Data Fusion Algorithms for Wireless Sensor Networks

    CERN Document Server

    Abdelgawad, Ahmed

    2012-01-01

    This book introduces resource-aware data fusion algorithms to gather and combine data from multiple sources (e.g., sensors) in order to achieve inferences.  These techniques can be used in centralized and distributed systems to overcome sensor failure, technological limitation, and spatial and temporal coverage problems. The algorithms described in this book are evaluated with simulation and experimental results to show they will maintain data integrity and make data useful and informative.   Describes techniques to overcome real problems posed by wireless sensor networks deployed in circumstances that might interfere with measurements provided, such as strong variations of pressure, temperature, radiation, and electromagnetic noise; Uses simulation and experimental results to evaluate algorithms presented and includes real test-bed; Includes case study implementing data fusion algorithms on a remote monitoring framework for sand production in oil pipelines.

  5. Network anomaly detection a machine learning perspective

    CERN Document Server

    Bhattacharyya, Dhruba Kumar

    2013-01-01

    With the rapid rise in the ubiquity and sophistication of Internet technology and the accompanying growth in the number of network attacks, network intrusion detection has become increasingly important. Anomaly-based network intrusion detection refers to finding exceptional or nonconforming patterns in network traffic data compared to normal behavior. Finding these anomalies has extensive applications in areas such as cyber security, credit card and insurance fraud detection, and military surveillance for enemy activities. Network Anomaly Detection: A Machine Learning Perspective presents mach

  6. Two Novel On-policy Reinforcement Learning Algorithms based on TD(lambda)-methods

    NARCIS (Netherlands)

    Wiering, M.A.; Hasselt, H. van

    2007-01-01

    This paper describes two novel on-policy reinforcement learning algorithms, named QV(lambda)-learning and the actor critic learning automaton (ACLA). Both algorithms learn a state value-function using TD(lambda)-methods. The difference between the algorithms is that QV-learning uses the learned

  7. A Survey of Linear Network Coding and Network Error Correction Code Constructions and Algorithms

    Directory of Open Access Journals (Sweden)

    Michele Sanna

    2011-01-01

    Full Text Available Network coding was introduced by Ahlswede et al. in a pioneering work in 2000. This paradigm encompasses coding and retransmission of messages at the intermediate nodes of the network. In contrast with traditional store-and-forward networking, network coding increases the throughput and the robustness of the transmission. Linear network coding is a practical implementation of this new paradigm covered by several research works that include rate characterization, error-protection coding, and construction of codes. Determining the coding characteristics is especially important, as it provides the premise for an efficient transmission. In this paper, we review the recent breakthroughs in linear network coding for acyclic networks with a survey of the code construction literature. Deterministic construction algorithms and randomized procedures are presented for traditional network coding and for network-control network coding.

  8. ARCHITECTURES AND ALGORITHMS FOR COGNITIVE NETWORKS ENABLED BY QUALITATIVE MODELS

    DEFF Research Database (Denmark)

    Balamuralidhar, P.

    2013-01-01

    Complexity of communication networks is ever increasing and getting complicated by their heterogeneity and dynamism. Traditional techniques are facing challenges in network performance management. Cognitive networking is an emerging paradigm to make networks more intelligent, thereby overcoming...... traditional limitations and potentially achieving better performance. The vision is that, networks should be able to monitor themselves, reason upon changes in self and environment, act towards the achievement of specific goals and learn from experience. The concept of a Cognitive Engine (CE) supporting...... cognitive functions, as part of network elements, enabling above said autonomic capabilities is gathering attention. Awareness of the self and the world is an important aspect of the cognitive engine to be autonomic. This is achieved through embedding their models in the engine, but the complexity...

  9. Consensus algorithm in smart grid and communication networks

    Science.gov (United States)

    Alfagee, Husain Abdulaziz

    On a daily basis, consensus theory attracts more and more research from different areas of interest, applying its techniques to solve technical problems in a way that is faster, more reliable, and even more precise than ever before. A power system network is one of the fields where consensus theory is employed extensively. The use of the consensus algorithm to solve the Economic Dispatch and Load Restoration Problems is a good example. Instead of a conventional central controller, some researchers have explored an algorithm to solve the above-mentioned problems in a distributed manner, using the consensus algorithm, which is based on calculation (i.e., non-estimation) methods for updating the information consensus matrix. Starting from this point of solving these types of problems in a distributed fashion using the consensus algorithm, we have implemented a new advanced consensus algorithm. It is based on adaptive estimation techniques, such as the Gradient Algorithm and the Recursive Least Square Algorithm, to solve the same problems. This advanced work was tested on different case studies that had formerly been explored, as seen in references 5, 7, and 18. Three and five generators, or agents, with different topologies correspond to the Economic Dispatch Problem, and the IEEE 16-Bus power system corresponds to the Load Restoration Problem. In all the cases we have studied, the results met our expectations with extreme accuracy and completely matched the results of the previous researchers. There is little question that this research proves the capability and dependability of using the consensus algorithm, based on estimation methods such as the Gradient Algorithm and the Recursive Least Square Algorithm, to solve such power problems.
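
    As background for the consensus step itself (independent of the dispatch or restoration application), the numpy sketch below runs a plain average-consensus iteration on a five-agent ring: every agent repeatedly mixes its value with its neighbours' values and all agents converge to the common average. The topology, step size, and local values are illustrative assumptions.

import numpy as np

A = np.array([[0, 1, 0, 0, 1],        # adjacency matrix of a ring of 5 agents
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
eps = 0.9 / deg.max()                            # step below 1/max-degree keeps it stable
W = np.eye(5) - eps * (np.diag(deg) - A)         # Laplacian-based mixing matrix

x = np.array([10.0, 4.0, 6.0, 8.0, 2.0])         # local values (e.g. incremental costs)
for _ in range(100):
    x = W @ x                                    # each agent only uses neighbour values
print(x, "-> every entry is close to the true average 6.0")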

  10. Improved Fault Classification in Series Compensated Transmission Line: Comparative Evaluation of Chebyshev Neural Network Training Algorithms.

    Science.gov (United States)

    Vyas, Bhargav Y; Das, Biswarup; Maheshwari, Rudra Prakash

    2016-08-01

    This paper presents the Chebyshev neural network (ChNN) as an improved artificial intelligence technique for power system protection studies and examines the performances of two ChNN learning algorithms for fault classification of series compensated transmission line. The training algorithms are least-square Levenberg-Marquardt (LSLM) and recursive least-square algorithm with forgetting factor (RLSFF). The performances of these algorithms are assessed based on their generalization capability in relating the fault current parameters with an event of fault in the transmission line. The proposed algorithm is fast in response as it utilizes postfault samples of three phase currents measured at the relaying end corresponding to half-cycle duration only. After being trained with only a small part of the generated fault data, the algorithms have been tested over a large number of fault cases with wide variation of system and fault parameters. Based on the studies carried out in this paper, it has been found that although the RLSFF algorithm is faster for training the ChNN in the fault classification application for series compensated transmission lines, the LSLM algorithm has the best accuracy in testing. The results prove that the proposed ChNN-based method is accurate, fast, easy to design, and immune to the level of compensations. Thus, it is suitable for digital relaying applications.

  11. A fuzzified BRAIN algorithm for learning DNF from incomplete data

    CERN Document Server

    Rampone, Salvatore

    2010-01-01

    Aim of this paper is to address the problem of learning Boolean functions from training data with missing values. We present an extension of the BRAIN algorithm, called U-BRAIN (Uncertainty-managing Batch Relevance-based Artificial INtelligence), conceived for learning DNF Boolean formulas from partial truth tables, possibly with uncertain values or missing bits. Such an algorithm is obtained from BRAIN by introducing fuzzy sets in order to manage uncertainty. In the case where no missing bits are present, the algorithm reduces to the original BRAIN.

  12. Towards a Pattern Language for Networked Learning

    NARCIS (Netherlands)

    Goodyear, Peter; Avgeriou, Paris; Baggetun, Rune; Bartoluzzi, Sonia; Retalis, Simeon; Ronteltap, Frans; Rusman, Ellen

    2004-01-01

    The work of designing a useful, convivial networked learning environment is complex and demanding. People new to designing for networked learning face a number of major challenges when they try to draw on the experience of others – whether that experience is shared informally, in the everyday

  13. Learning dynamic Bayesian networks with mixed variables

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers dynamic Bayesian networks for discrete and continuous variables. We only treat the case, where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learned...

  14. Learning and structure of neuronal networks

    Indian Academy of Sciences (India)

    We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamical systems is considered for this purpose and the coupling strengths are made to evolve according to a temporal learning rule that is based on the paradigm of spike-time-dependent plasticity (STDP). This incorporates ...

  15. Learning and structure of neuronal networks

    Indian Academy of Sciences (India)

    We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamical systems is considered for this purpose and the coupling strengths are made to evolve according to a temporal learning rule that is based on the ...

  16. Personalized Learning Network Teaching Model

    Science.gov (United States)

    Feng, Zhou

    Starting from the salient features of adaptive learning systems, this paper expounds that personalized learning, i.e., adapting the system to the learner, is the key to adaptive learning. From the perspective of design theory, it puts forward an individual-learner model for adaptive learning system design and, using data mining techniques, makes an initial establishment of a personalized adaptive learning system model.

  17. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    Science.gov (United States)

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency comparing with the state-of-the-art approaches.
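
    The DANoC codesign itself is hardware-specific; the numpy sketch below only illustrates the generic software idea of adaptively removing the majority of neural connections by magnitude pruning a weight matrix. The keep fraction and layer shape are arbitrary assumptions, and this is not the paper's deep adaptive network.

import numpy as np

def prune_by_magnitude(weights, keep_fraction=0.05):
    # Zero out all but the largest-magnitude weights; return the sparse matrix and mask.
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]       # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))                  # a dense layer's weights
W_sparse, mask = prune_by_magnitude(W)
print("kept", round(100 * mask.mean(), 1), "% of connections")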

  18. Properties of healthcare teaming networks as a function of network construction algorithms

    Science.gov (United States)

    Trayhan, Melissa; Farooq, Samir A.; Fucile, Christopher; Ghoshal, Gourab; White, Robert J.; Quill, Caroline M.; Rosenberg, Alexander; Barbosa, Hugo Serrano; Bush, Kristen; Chafi, Hassan; Boudreau, Timothy

    2017-01-01

    Network models of healthcare systems can be used to examine how providers collaborate, communicate, refer patients to each other, and to map how patients traverse the network of providers. Most healthcare service network models have been constructed from patient claims data, using billing claims to link a patient with a specific provider in time. The data sets can be quite large (10^6–10^8 individual claims per year), making standard methods for network construction computationally challenging and thus requiring the use of alternate construction algorithms. While these alternate methods have seen increasing use in generating healthcare networks, there is little to no literature comparing the differences in the structural properties of the generated networks, which as we demonstrate, can be dramatically different. To address this issue, we compared the properties of healthcare networks constructed using different algorithms from 2013 Medicare Part B outpatient claims data. Three different algorithms were compared: binning, sliding frame, and trace-route. Unipartite networks linking either providers or healthcare organizations by shared patients were built using each method. We find that each algorithm produced networks with substantially different topological properties, as reflected by numbers of edges, network density, assortativity, clustering coefficients and other structural measures. Provider networks adhered to a power law, while organization networks were best fit by a power law with exponential cutoff. Censoring networks to exclude edges with less than 11 shared patients, a common de-identification practice for healthcare network data, markedly reduced edge numbers and network density, and greatly altered measures of vertex prominence such as the betweenness centrality. Data analysis identified patterns in the distance patients travel between network providers, and a striking set of teaming relationships between providers in the Northeast United States and

  19. Genetic Algorithm Learning in a New Keynesian Macroeconomic Setup

    NARCIS (Netherlands)

    Hommes, C.; Makarewicz, T.; Massaro, D.; Smits, T.

    2015-01-01

    In order to understand heterogeneous behaviour amongst agents, empirical data from Learning-to-Forecast (LtF) experiments can be used to construct learning models. This paper follows up on Assenza et al. (2013) by using a genetic algorithms (GA) model to replicate the results from their LtF

  20. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.

    Science.gov (United States)

    Whittington, James C R; Bogacz, Rafal

    2017-05-01

    To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output.
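
    A minimal numpy sketch of this style of learning follows: a two-layer linear predictive-coding network first relaxes its hidden activity to reduce local prediction errors while the output is clamped to the target, then updates each weight matrix with a purely local, Hebbian-looking product of presynaptic activity and postsynaptic error. The layer sizes, learning rates, and linear units are simplifying assumptions made for brevity; the paper gives the full model and the conditions under which the updates approach backpropagation.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 8, 2
W0 = rng.normal(scale=0.3, size=(n_hid, n_in))   # predicts the hidden layer from the input
W1 = rng.normal(scale=0.3, size=(n_out, n_hid))  # predicts the output layer from the hidden
M = rng.normal(size=(n_out, n_in))               # ground-truth mapping to be learned

def infer(x, y, T=30, lr=0.1):
    # Relax hidden activity to minimise squared prediction errors (output clamped to y).
    h = W0 @ x
    for _ in range(T):
        e1 = h - W0 @ x                          # error node at the hidden layer
        e2 = y - W1 @ h                          # error node at the clamped output layer
        h += lr * (-e1 + W1.T @ e2)              # local gradient step on the energy
    return h, h - W0 @ x, y - W1 @ h

for _ in range(4000):
    x = rng.normal(size=n_in)
    y = M @ x                                    # supervised target clamps the output layer
    h, e1, e2 = infer(x, y)
    W0 += 0.01 * np.outer(e1, x)                 # Hebbian: presynaptic activity times error
    W1 += 0.01 * np.outer(e2, h)

x = rng.normal(size=n_in)
print("remaining prediction error:", np.linalg.norm(W1 @ (W0 @ x) - M @ x))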

  1. GPS-free localization algorithm for wireless sensor networks.

    Science.gov (United States)

    Wang, Lei; Xu, Qingzheng

    2010-01-01

    Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation, wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time.
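
    The coordinate-merging step can be illustrated in isolation with numpy: given a few nodes whose positions are known in both a local and a global frame, a least-squares rotation and translation is estimated (guarding against a reflected, i.e. flipped, solution) and packed into a 3x3 homogeneous-coordinate matrix. The data and the SVD-based Procrustes fit are illustrative assumptions rather than the exact construction in the paper.

import numpy as np

def merge_transform(local_pts, global_pts):
    # Least-squares rigid transform mapping one 2-D frame onto another,
    # returned as a 3x3 homogeneous-coordinate matrix.
    lc, gc = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - lc).T @ (global_pts - gc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # reject a reflection (flip ambiguity)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = gc - R @ lc
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

# Illustrative check: recover a known rotation and translation from 4 shared nodes.
rng = np.random.default_rng(3)
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
local = rng.uniform(0, 10, size=(4, 2))
global_ = local @ R_true.T + np.array([5.0, -2.0])
T = merge_transform(local, global_)
p = np.array([1.0, 2.0, 1.0])                    # a local point in homogeneous coordinates
print(T @ p)                                     # the same point in the merged global frame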

  2. Distributed interference alignment iterative algorithms in symmetric wireless network

    Directory of Open Access Journals (Sweden)

    YANG Jingwen

    2015-02-01

    Full Text Available Interference alignment is a novel interference management technique which has attracted wide attention around the world. Interference alignment overlaps interference in the same signal space at the receiving terminal by precoding, so as to thoroughly eliminate the influence of interference on the expected signals, thus allowing the desired user to achieve the maximum degrees of freedom. In this paper we study three typical algorithms for realizing interference alignment: minimizing the leakage interference, maximizing the Signal to Interference plus Noise Ratio (SINR), and minimizing the mean square error (MSE). All of these algorithms utilize the reciprocity of the wireless network and iterate the precoders between the original network and the reverse network so as to achieve interference alignment. We use the uplink transmit rate to analyze the performance of these three algorithms. Numerical simulation results show the advantages of these algorithms, which is the foundation for further study in the future. The feasibility and future of interference alignment are also discussed at last.

  3. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network.

    Science.gov (United States)

    Lin, Kai; Wang, Di; Hu, Long

    2016-07-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.

  4. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network

    Directory of Open Access Journals (Sweden)

    Kai Lin

    2016-07-01

    Full Text Available With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
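
    As a small illustration of the Dempster-Shafer fusion step mentioned in both records above (not the CMNC implementation itself), the Python sketch below combines two mass functions with Dempster's rule of combination; the focal elements and mass values are invented for the example.

def dempster_combine(m1, m2):
    # Dempster's rule for two mass functions whose focal elements are frozensets.
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2              # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: the two bodies of evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors give evidence about the content class of a reading.
A, B = frozenset({"temperature"}), frozenset({"humidity"})
either = A | B
m_sensor1 = {A: 0.6, B: 0.1, either: 0.3}
m_sensor2 = {A: 0.5, B: 0.2, either: 0.3}
print(dempster_combine(m_sensor1, m_sensor2))    # combined belief concentrates on A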

  5. Learning motor skills from algorithms to robot experiments

    CERN Document Server

    Kober, Jens

    2014-01-01

    This book presents the state of the art in reinforcement learning applied to robotics both in terms of novel algorithms and applications. It discusses recent approaches that allow robots to learn motor skills and presents tasks that need to take into account the dynamic behavior of the robot and its environment, where a kinematic movement plan is not sufficient. The book illustrates a method that learns to generalize parameterized motor plans which is obtained by imitation or reinforcement learning, by adapting a small set of global parameters, and appropriate kernel-based reinforcement learning algorithms. The presented applications explore highly dynamic tasks and exhibit a very efficient learning process. All proposed approaches have been extensively validated with benchmarks tasks, in simulation, and on real robots. These tasks correspond to sports and games but the presented techniques are also applicable to more mundane household tasks. The book is based on the first author’s doctoral thesis, which wo...

  6. A multi-agent genetic algorithm for community detection in complex networks

    Science.gov (United States)

    Li, Zhangtao; Liu, Jing

    2016-05-01

    Complex networks are popularly used to represent a lot of practical systems in the domains of biology and sociology, and the structure of community is one of the most important network attributes, which has received an enormous amount of attention. Community detection is the process of discovering the community structure hidden in complex networks, and modularity Q is one of the best-known quality functions measuring the quality of communities of networks. In this paper, a multi-agent genetic algorithm, named MAGA-Net, is proposed to optimize the modularity value for community detection. An agent, coded as a division of a network, represents a candidate solution. All agents live in a lattice-like environment, with each agent fixed on a lattice point. A series of operators are designed, namely a split-and-merging-based neighborhood competition operator, hybrid neighborhood crossover, adaptive mutation and a self-learning operator, to increase the modularity value. In the experiments, the performance of MAGA-Net is validated on both well-known real-world benchmark networks and large-scale synthetic LFR networks with 5000 nodes. Systematic comparisons with GA-Net and Meme-Net show that MAGA-Net outperforms these two algorithms and can detect communities with high speed, accuracy and stability.
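
    Since MAGA-Net optimizes modularity Q, a minimal Python sketch of how Q is evaluated for a given partition is shown below; the toy graph (two triangles joined by one edge) and its partition are illustrative assumptions, while the agent-based operators themselves are described in the paper.

from collections import defaultdict

def modularity(edges, community_of):
    # Newman modularity Q of a partition of an undirected, unweighted graph.
    m = len(edges)
    degree, deg_sum = defaultdict(int), defaultdict(int)
    intra = 0
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if community_of[u] == community_of[v]:
            intra += 1                            # edge falls inside one community
    for node, d in degree.items():
        deg_sum[community_of[node]] += d
    return intra / m - sum((s / (2 * m)) ** 2 for s in deg_sum.values())

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, part), 3))          # about 0.357 for this partition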

  7. Iterative Learning Control with Forgetting Factor for Urban Road Network

    Directory of Open Access Journals (Sweden)

    Tianyi Lan

    2017-01-01

    Full Text Available In order to improve traffic conditions, a novel iterative learning control (ILC) algorithm with a forgetting factor for urban road networks is proposed in this paper, exploiting the repeating characteristics of traffic flow. Rigorous analysis shows that the proposed ILC algorithm can guarantee asymptotic convergence. Through iterative learning control of the traffic signals, the number of vehicles on each road in the network can gradually approach the desired level, thereby preventing oversaturation and traffic congestion. The introduced forgetting factor can effectively adjust the control input according to the states of the system and filter along the direction of the iteration. The results show that the forgetting factor has an important effect on the robustness of the system. Theoretical analysis and experimental simulations are given to verify the validity of the proposed method.
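
    To make the control law concrete, the numpy sketch below runs a P-type iterative learning controller with a forgetting factor on a toy first-order plant: each iteration replays the same reference trajectory, and the new input is a discounted copy of the previous input plus a gain times the last tracking error. The plant, gains, and forgetting factor are illustrative assumptions, not the traffic model or parameters of the paper.

import numpy as np

N, iterations = 50, 30
a, b = 0.8, 1.0                                  # toy plant: x[t] = a*x[t-1] + b*u[t]
y_ref = np.sin(np.linspace(0, np.pi, N))         # desired trajectory over one iteration

def run_plant(u):
    x, y = 0.0, np.zeros(N)
    for t in range(N):
        x = a * x + b * u[t]
        y[t] = x
    return y

u = np.zeros(N)
gamma, forget = 0.5, 0.95                        # learning gain and forgetting factor
for k in range(iterations):
    e = y_ref - run_plant(u)                     # tracking error of this repetition
    u = forget * u + gamma * e                   # forget old input a little, correct with error
    if k % 10 == 0 or k == iterations - 1:
        print(f"iteration {k:2d}  max |error| = {np.abs(e).max():.4f}")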

  8. New Scheduling Algorithms for Agile All-Photonic Networks

    Science.gov (United States)

    Mehri, Mohammad Saleh; Ghaffarpour Rahbar, Akbar

    2017-12-01

    An optical overlaid star network is a class of agile all-photonic networks that consists of one or more core node(s) at the center of the star network and a number of edge nodes around the core node. In this architecture, a core node may use a scheduling algorithm for transmission of traffic through the network. A core node is responsible for scheduling optical packets that arrive from edge nodes and switching them toward their destinations. Nowadays, most edge nodes use virtual output queue (VOQ) architecture for buffering client packets to achieve high throughput. This paper presents two efficient scheduling algorithms called discretionary iterative matching (DIM) and adaptive DIM. These schedulers find maximum matching in a small number of iterations and provide high throughput and incur low delay. The number of arbiters in these schedulers and the number of messages exchanged between inputs and outputs of a core node are reduced. We show that DIM and adaptive DIM can provide better performance in comparison with iterative round-robin matching with SLIP (iSLIP). SLIP means the act of sliding for a short distance to select one of the requested connections based on the scheduling algorithm.

  9. Quantifying the multi-scale performance of network inference algorithms.

    Science.gov (United States)

    Oates, Chris J; Amos, Richard; Spencer, Simon E F

    2014-10-01

    Graphical models are widely used to study complex multivariate biological systems. Network inference algorithms aim to reverse-engineer such models from noisy experimental data. It is common to assess such algorithms using techniques from classifier analysis. These metrics, based on ability to correctly infer individual edges, possess a number of appealing features including invariance to rank-preserving transformation. However, regulation in biological systems occurs on multiple scales and existing metrics do not take into account the correctness of higher-order network structure. In this paper novel performance scores are presented that share the appealing properties of existing scores, whilst capturing ability to uncover regulation on multiple scales. Theoretical results confirm that performance of a network inference algorithm depends crucially on the scale at which inferences are to be made; in particular strong local performance does not guarantee accurate reconstruction of higher-order topology. Applying these scores to a large corpus of data from the DREAM5 challenge, we undertake a data-driven assessment of estimator performance. We find that the "wisdom of crowds" network, that demonstrated superior local performance in the DREAM5 challenge, is also among the best performing methodologies for inference of regulation on multiple length scales.

  10. The effects of cultural learning in populations of neural networks.

    Science.gov (United States)

    Curran, Dara; O'Riordan, Colm

    2007-01-01

    Population learning can be described as the iterative Darwinian process of fitness-based selection and genetic transfer of information leading to populations of higher fitness and is often simulated using genetic algorithms. Cultural learning describes the process of information transfer between individuals in a population through non-genetic means. Cultural learning has been simulated by combining genetic algorithms and neural networks using a teacher-pupil scenario where highly fit individuals are selected as teachers and instruct the next generation. By examining the innate fitness of a population (i.e., the fitness of the population measured before any cultural learning takes place), it is possible to examine the effects of cultural learning on the population's genetic makeup. Our model explores the effect of cultural learning on a population and employs three benchmark sequential decision tasks as the evolutionary task for the population: connect-four, tic-tac-toe, and blackjack. Experiments are conducted with populations employing population learning alone and populations combining population and cultural learning. The article presents results showing the gradual transfer of knowledge from genes to the cultural process, illustrated by the simultaneous decrease in the population's innate fitness and the increase of its acquired fitness measured after learning takes place.

  11. deal: A Package for Learning Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Susanne G. Boettcher

    2003-12-01

    Full Text Available deal is a software package for use with R. It includes several methods for analysing data using Bayesian networks with variables of discrete and/or continuous types but restricted to conditionally Gaussian networks. Construction of priors for network parameters is supported and their parameters can be learned from data using conjugate updating. The network score is used as a metric to learn the structure of the network and forms the basis of a heuristic search strategy. deal has an interface to Hugin.

  12. Comparative Study of Machine Learning Algorithms for Heart Disease Prediction

    OpenAIRE

    Acharya, Abhisek

    2017-01-01

    As technology and hardware capability advance, machine learning is also advancing, and its use is growing in every field from stock analysis to medical image processing. Heart disease prediction is one of the fields where machine learning can be implemented. Therefore, this study investigates different machine learning algorithms and compares the results using different performance metrics, i.e., accuracy, precision, recall, F1-score, etc. The dataset used for this study was taken from ...

  13. Max-margin weight learning for medical knowledge network.

    Science.gov (United States)

    Jiang, Jingchi; Xie, Jing; Zhao, Chao; Su, Jia; Guan, Yi; Yu, Qiubin

    2018-03-01

    The application of medical knowledge strongly affects the performance of intelligent diagnosis, and the method of learning the weights of medical knowledge plays a substantial role in probabilistic graphical models (PGMs). The purpose of this study is to investigate a discriminative weight-learning method based on a medical knowledge network (MKN). We propose a training model called the maximum margin medical knowledge network (M³KN), which is strictly derived for calculating the weight of medical knowledge. Using the definition of a reasonable margin, the weight learning can be transformed into a margin optimization problem. To solve the optimization problem, we adopt a sequential minimal optimization (SMO) algorithm and the clique property of a Markov network. Ultimately, M³KN not only incorporates the inference ability of PGMs but also deals with high-dimensional logic knowledge. The experimental results indicate that M³KN obtains a higher F-measure score than the maximum likelihood learning algorithm of MKN for both Chinese Electronic Medical Records (CEMRs) and Blood Examination Records (BERs). Furthermore, the proposed approach is obviously superior to some classical machine learning algorithms for medical diagnosis. To adequately manifest the importance of domain knowledge, we numerically verify that the diagnostic accuracy of M³KN gradually improves as the number of learned CEMRs, which contain important medical knowledge, increases. Our experimental results show that the proposed method performs reliably for learning the weights of medical knowledge. M³KN outperforms other existing methods by achieving an F-measure of 0.731 for CEMRs and 0.4538 for BERs. This further illustrates that M³KN can facilitate the investigations of intelligent healthcare. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Conditions for Productive Learning in Network Learning Environments

    DEFF Research Database (Denmark)

    Ponti, M.; Dirckinck-Holmfeld, Lone; Lindström, B.

    2004-01-01

    The Kaleidoscope1 Jointly Executed Integrating Research Project (JEIRP) on Conditions for Productive Networked Learning Environments is developing and elaborating conceptual understandings of Computer Supported Collaborative Learning (CSCL) emphasizing the use of cross-cultural comparative......: Pedagogical design and the dialectics of the digital artefacts, the concept of collaboration, ethics/trust, identity and the role of scaffolding of networked learning environments.   The JEIRP is motivated by the fact that many networked learning environments in various European educational settings...... are designed without a deep understanding of the pedagogical, communicative and collaborative conditions embedded in networked learning. Despite the existence of good theoretical views pointing to a social understanding of learning, rather than a traditional individualistic and information processing approach...

  15. Machine learning for network-based malware detection

    DEFF Research Database (Denmark)

    Stevanovic, Matija

    and based on different, mutually complementary, principles of traffic analysis. The proposed approaches rely on machine learning algorithms (MLAs) for automated and resource-efficient identification of the patterns of malicious network traffic. We evaluated the proposed methods through extensive evaluations...... traffic that provides reliable and time-efficient labeling. Finally, the thesis outlines the opportunities for future work on realizing robust and effective detection solutions....

  16. Analysis Resilient Algorithm on Artificial Neural Network Backpropagation

    Science.gov (United States)

    Saputra, Widodo; Tulus; Zarlis, Muhammad; Widia Sembiring, Rahmat; Hartama, Dedy

    2017-12-01

    Prediction is required by decision makers to anticipate future planning. Artificial Neural Network (ANN) backpropagation is one such method; however, it still has a weakness, namely its long training time, which is a reason to improve the method and accelerate training. One variant of ANN backpropagation is the resilient method, which changes the network weights and biases through a direct adaptation of the weight steps based on local gradient information from every learning iteration. With it, the prediction results on the Istanbul Stock Exchange training data get better: the Mean Square Error (MSE) value becomes smaller and the accuracy increases.
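
    The resilient (Rprop) idea is easy to state in code: every weight keeps its own step size, which grows while the local gradient keeps its sign and shrinks when the sign flips, and only the sign of the gradient is used for the update. The numpy sketch below applies it to a trivial quadratic; the step-size constants are commonly quoted defaults and the example is an illustration, not the study's trainer.

import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    # One resilient-backpropagation style update per parameter.
    # Returns (delta, new_step, carried_grad).
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)  # skip the update right after a sign flip
    return -np.sign(grad) * step, step, grad

# Illustrative use: minimise f(w) = (w - 3)^2 for each component independently.
w = np.array([10.0, -4.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(60):
    grad = 2.0 * (w - 3.0)
    delta, step, prev_grad = rprop_step(grad, prev_grad, step)
    w += delta
print(w)   # both components approach 3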

  17. A Collaborative Learning Network Approach to Improvement: The CUSP Learning Network.

    Science.gov (United States)

    Weaver, Sallie J; Lofthus, Jennifer; Sawyer, Melinda; Greer, Lee; Opett, Kristin; Reynolds, Catherine; Wyskiel, Rhonda; Peditto, Stephanie; Pronovost, Peter J

    2015-04-01

    Collaborative improvement networks draw on the science of collaborative organizational learning and communities of practice to facilitate peer-to-peer learning, coaching, and local adaption. Although significant improvements in patient safety and quality have been achieved through collaborative methods, insight regarding how collaborative networks are used by members is needed. Improvement Strategy: The Comprehensive Unit-based Safety Program (CUSP) Learning Network is a multi-institutional collaborative network that is designed to facilitate peer-to-peer learning and coaching specifically related to CUSP. Member organizations implement all or part of the CUSP methodology to improve organizational safety culture, patient safety, and care quality. Qualitative case studies developed by participating members examine the impact of network participation across three levels of analysis (unit, hospital, health system). In addition, results of a satisfaction survey designed to evaluate member experiences were collected to inform network development. Common themes across case studies suggest that members found value in collaborative learning and sharing strategies across organizational boundaries related to a specific improvement strategy. The CUSP Learning Network is an example of network-based collaborative learning in action. Although this learning network focuses on a particular improvement methodology-CUSP-there is clear potential for member-driven learning networks to grow around other methods or topic areas. Such collaborative learning networks may offer a way to develop an infrastructure for longer-term support of improvement efforts and to more quickly diffuse creative sustainment strategies.

  18. Transmission network expansion planning based on hybridization model of neural networks and harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ameli

    2012-01-01

    Full Text Available Transmission Network Expansion Planning (TNEP) is a basic part of power network planning that determines where, when and how many new transmission lines should be added to the network. So, the TNEP is an optimization problem in which the expansion purposes are optimized. Artificial Intelligence (AI) tools such as the Genetic Algorithm (GA), Simulated Annealing (SA), Tabu Search (TS) and Artificial Neural Networks (ANNs) are methods used for solving the TNEP problem. Today, by using the hybridization models of AI tools, we can solve the TNEP problem for large-scale systems, which shows the effectiveness of utilizing such models. In this paper, a new approach to the hybridization model of Probabilistic Neural Networks (PNNs) and the Harmony Search Algorithm (HSA) was used to solve the TNEP problem. Finally, by considering the uncertain role of the load based on a scenario technique, this proposed model was tested on the Garver's 6-bus network.

  19. VSMURF: A Novel Sliding Window Cleaning Algorithm for RFID Networks

    Directory of Open Access Journals (Sweden)

    He Xu

    2017-01-01

    Full Text Available Radio Frequency Identification (RFID) is one of the key technologies of the Internet of Things (IoT) and is used in many areas, such as mobile payments, public transportation, smart locks, and environment protection. However, the performance of RFID equipment can be easily affected by the surrounding environment, such as electronic products and metal appliances. These can impose an impact on the RF signal, which makes the collection of RFID data unreliable. Usually, the unreliability of RFID source data includes three aspects: false negatives, false positives, and dirty data. False negatives are the key problem, as the probability of false positives and dirty data occurrence is relatively small. This paper proposes a novel sliding window cleaning algorithm called VSMURF, which is based on the traditional SMURF algorithm and combines the dynamic change of tags with a value analysis of confidence. Experimental results show that the VSMURF algorithm performs better in most conditions, whether the tag's speed is low or high. In particular, if the velocity parameter is set to 2 m/epoch, our proposed VSMURF algorithm performs better than SMURF. The results also show that the VSMURF algorithm has better performance than other algorithms in solving the problem of false negatives for RFID networks.

  20. Human matching behavior in social networks: an algorithmic perspective.

    Science.gov (United States)

    Coviello, Lorenzo; Franceschetti, Massimo; McCubbins, Mathew D; Paturi, Ramamohan; Vattani, Andrea

    2012-01-01

    We argue that algorithmic modeling is a powerful approach to understanding the collective dynamics of human behavior. We consider the task of pairing up individuals connected over a network, according to the following model: each individual is able to propose to match with and accept a proposal from a neighbor in the network; if a matched individual proposes to another neighbor or accepts another proposal, the current match will be broken; individuals can only observe whether their neighbors are currently matched but have no knowledge of the network topology or the status of other individuals; and all individuals have the common goal of maximizing the total number of matches. By examining the experimental data, we identify a behavioral principle called prudence, develop an algorithmic model, analyze its properties mathematically and by simulations, and validate the model with human subject experiments for various network sizes and topologies. Our results include i) a 1/2-approximate maximum matching is obtained in logarithmic time in the network size for bounded degree networks; ii) for any constant ε > 0, a (1 - ε)-approximate maximum matching is obtained in polynomial time, while obtaining a maximum matching can require an exponential time; and iii) convergence to a maximum matching is slower on preferential attachment networks than on small-world networks. These results allow us to predict that while humans can find a "good quality" matching quickly, they may be unable to find a maximum matching in feasible time. We show that the human subjects largely abide by prudence, and their collective behavior is closely tracked by the above predictions.

  1. Bioinspired evolutionary algorithm based for improving network coverage in wireless sensor networks.

    Science.gov (United States)

    Abbasi, Mohammadjavad; Bin Abd Latiff, Muhammad Shafie; Chizari, Hassan

    2014-01-01

    Wireless sensor networks (WSNs) include sensor nodes in which each node is able to monitor a physical area and send collected information to the base station for further analysis. A key issue in WSNs is the detection and coverage of the target area, which is provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes several scenarios for applying sensor node movement to improve network coverage based on a bioinspired evolutionary algorithm, and explains the concerns and objectives of controlling sensor node coverage. We discuss the area coverage and target detection model under the evolutionary algorithm.

  2. Research on Joint Handoff Algorithm in Vehicles Networks

    Directory of Open Access Journals (Sweden)

    Yuming Bi

    2016-01-01

    Full Text Available With the evolution of communication services from the fourth generation (4G) to the fifth generation (5G), we are going to face diverse challenges from the new network systems. On the one hand, seamless handoff is expected to integrate universal access among various network mechanisms. On the other hand, a variety of 5G technologies will complement each other to provide ubiquitous high-speed wireless connectivity. Because the current wireless network cannot flexibly support handoff among Wireless Access for Vehicular Environment (WAVE), WiMAX, and LTE, this paper provides an advanced handoff algorithm to solve this problem. Firstly, the received signal strength is classified, and the vehicle speed and data rate under different channel conditions are optimized. Then, the optimal network is selected for handoff. Simulation results show that the proposed algorithm can adapt well to high-speed environments, guarantee flexible and reasonable vehicle access to a variety of networks, and prevent ping-pong handoff and link access failure effectively.

  3. Neural-network-based voice-tracking algorithm

    Science.gov (United States)

    Baker, Mary; Stevens, Charise; Chaparro, Brennen; Paschall, Dwayne

    2002-11-01

    A voice-tracking algorithm was developed and tested for the purposes of electronically separating the voice signals of simultaneous talkers. Many individuals suffer from hearing disorders that often inhibit their ability to focus on a single speaker in a multiple speaker environment (the cocktail party effect). Digital hearing aid technology makes it possible to implement complex algorithms for speech processing in both the time and frequency domains. In this work, an average magnitude difference function (AMDF) was performed on mixed voice signals in order to determine the fundamental frequencies present in the signals. A time prediction neural network was trained to recognize normal human voice inflection patterns, including rising, falling, rising-falling, and falling-rising patterns. The neural network was designed to track the fundamental frequency of a single talker based on the training procedure. The output of the neural network can be used to design an active filter for speaker segregation. Tests were done using audio mixing of two to three speakers uttering short phrases. The AMDF function accurately identified the fundamental frequencies present in the signal. The neural network was tested using a single speaker uttering a short sentence. The network accurately tracked the fundamental frequency of the speaker.
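
    A bare-bones version of the AMDF stage can be written in a few lines of numpy: for each candidate lag, the mean absolute difference between the frame and its shifted copy is computed, and the lag with the smallest difference gives the pitch period. The sampling rate, search range, and synthetic test signal are illustrative assumptions; the described system adds the neural-network tracking stage on top of an AMDF-style front end.

import numpy as np

def amdf_pitch(frame, fs, f_lo=70.0, f_hi=400.0):
    # Average Magnitude Difference Function pitch estimate for one frame.
    lag_min, lag_max = int(fs / f_hi), int(fs / f_lo)
    amdf = np.array([np.mean(np.abs(frame[:-k] - frame[k:]))
                     for k in range(lag_min, lag_max + 1)])
    best_lag = lag_min + int(np.argmin(amdf))    # deepest dip marks the pitch period
    return fs / best_lag

# Illustrative check on a synthetic voiced-like signal at 150 Hz.
fs = 8000
t = np.arange(0, 0.04, 1 / fs)
frame = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
print(round(amdf_pitch(frame, fs), 1), "Hz")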

  4. District Heating Network Design and Configuration Optimization with Genetic Algorithm

    DEFF Research Database (Denmark)

    Li, Hongwei; Svendsen, Svend

    2011-01-01

    the heating plant location is allowed to vary. The connection between the heat generation plant and the end users can be represented with mixed integer and the pipe friction and heat loss formulations are non-linear. In order to find the optimal DH distribution pipeline configuration, the genetic algorithm...... which handles the mixed integer nonlinear programming problem was chosen. The network configuration was represented through binary and integer encoding and was optimized in terms of the net present cost (NPC). The optimization results indicated that the optimal DH network configuration is determined...

  5. Any Two Learning Algorithms Are (Almost) Exactly Identical

    Science.gov (United States)

    Wolpert, David H.

    2000-01-01

    This paper shows that if one is provided with a loss function, it can be used in a natural way to specify a distance measure quantifying the similarity of any two supervised learning algorithms, even non-parametric algorithms. Intuitively, this measure gives the fraction of targets and training sets for which the expected performance of the two algorithms differs significantly. Bounds on the value of this distance are calculated for the case of binary outputs and 0-1 loss, indicating that any two learning algorithms are almost exactly identical for such scenarios. As an example, for any two algorithms A and B, even for small input spaces and training sets, for less than 2e(-50) of all targets will the difference between A's and B's generalization performance exceed 1%. In particular, this is true if B is bagging applied to A, or boosting applied to A. These bounds can be viewed alternatively as telling us, for example, that the simple English phrase 'I expect that algorithm A will generalize from the training set with an accuracy of at least 75% on the rest of the target' conveys 20,000 bytes of information concerning the target. The paper ends by discussing some of the subtleties of extending the distance measure to give a full (non-parametric) differential geometry of the manifold of learning algorithms.

  6. Machine learning of network metrics in ATLAS Distributed Data Management

    Science.gov (United States)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  7. Distributed Multitarget Probabilistic Coverage Control Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ying Tian

    2014-01-01

    Full Text Available This paper is concerned with the problem of multitarget coverage based on a probabilistic detection model. Coverage configuration is an effective method to alleviate the energy-limitation problem of sensors. Firstly, considering the attenuation of a node's sensing ability, the target probabilistic coverage problem is defined and formalized, based on the Neyman-Pearson probabilistic detection model. Secondly, in order to turn off redundant sensors, a simplified judging rule is derived, which allows the probabilistic coverage judgment to be executed locally on each node. Thirdly, a distributed node schedule scheme is proposed for implementing the distributed algorithm. Simulation results show that this algorithm is robust to changes in network size and, compared with the physical coverage algorithm, it can effectively minimize the number of active sensors while guaranteeing that all the targets are γ-covered.

  8. Energy Efficient Distributed Fault Identification Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meenakshi Panda

    2014-01-01

    Full Text Available A distributed fault identification algorithm is proposed here to find both hard and soft faulty sensor nodes present in wireless sensor networks. The algorithm is distributed and self-detectable, and can detect the most common Byzantine faults such as stuck at zero, stuck at one, and random data. In the proposed approach, each sensor node gathers the observed data from its neighbors and computes the mean to check whether a faulty sensor node is present. If a node detects the presence of a faulty sensor node, it compares its observed data with the data of the neighbors and predicts a probable fault status. The final fault status is determined by diffusing the fault information from the neighbors. The accuracy and completeness of the algorithm are verified with the help of a statistical model of the sensors' data. The performance is evaluated in terms of detection accuracy, false alarm rate, detection latency, and message complexity.
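
    A minimal, centralized simulation of the neighbour-comparison idea described above might look as follows; the topology, readings, and deviation threshold are illustrative assumptions rather than the paper's actual rule.

```python
# Minimal sketch of the neighbour-comparison idea behind the fault
# identification algorithm: a node is flagged as probably faulty when its
# reading deviates strongly from the mean of its neighbours' readings.
# Topology, readings and threshold are illustrative assumptions.
import numpy as np

def probable_fault_status(readings, neighbors, threshold=3.0):
    """Return a dict node -> True if the node looks faulty."""
    status = {}
    for node, value in readings.items():
        neigh_vals = np.array([readings[m] for m in neighbors[node]])
        deviation = abs(value - neigh_vals.mean())
        spread = neigh_vals.std() + 1e-9          # avoid division by zero
        status[node] = deviation / spread > threshold
    return status

if __name__ == "__main__":
    readings = {0: 20.1, 1: 19.8, 2: 20.4, 3: 55.0, 4: 20.0}   # node 3 is stuck
    neighbors = {0: [1, 2, 4], 1: [0, 2, 3], 2: [0, 1, 3],
                 3: [1, 2, 4], 4: [0, 2, 3]}
    print(probable_fault_status(readings, neighbors))  # only node 3 flagged
```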

  9. Functional clustering algorithm for the analysis of dynamic network data

    Science.gov (United States)

    Feldt, S.; Waddell, J.; Hetrick, V. L.; Berke, J. D.; Żochowski, M.

    2009-05-01

    We formulate a technique for the detection of functional clusters in discrete event data. The advantage of this algorithm is that no prior knowledge of the number of functional groups is needed, as our procedure progressively combines data traces and derives the optimal clustering cutoff in a simple and intuitive manner through the use of surrogate data sets. In order to demonstrate the power of this algorithm to detect changes in network dynamics and connectivity, we apply it to both simulated neural spike train data and real neural data obtained from the mouse hippocampus during exploration and slow-wave sleep. Using the simulated data, we show that our algorithm performs better than existing methods. In the experimental data, we observe state-dependent clustering patterns consistent with known neurophysiological processes involved in memory consolidation.

  10. NML Computation Algorithms for Tree-Structured Multinomial Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Kontkanen Petri

    2007-01-01

    Full Text Available Typical problems in bioinformatics involve large discrete datasets. Therefore, in order to apply statistical methods in such domains, it is important to develop efficient algorithms suitable for discrete data. The minimum description length (MDL principle is a theoretically well-founded, general framework for performing statistical inference. The mathematical formalization of MDL is based on the normalized maximum likelihood (NML distribution, which has several desirable theoretical properties. In the case of discrete data, straightforward computation of the NML distribution requires exponential time with respect to the sample size, since the definition involves a sum over all the possible data samples of a fixed size. In this paper, we first review some existing algorithms for efficient NML computation in the case of multinomial and naive Bayes model families. Then we proceed by extending these algorithms to more complex, tree-structured Bayesian networks.
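
    For reference, the normalized maximum likelihood (NML) distribution at the core of these computations has the standard textbook form (not quoted from the paper): for a model class $\mathcal{M}$ and a discrete data set $x^n$ of size $n$,

$$
P_{\mathrm{NML}}(x^n \mid \mathcal{M}) \;=\; \frac{P\bigl(x^n \mid \hat{\theta}(x^n), \mathcal{M}\bigr)}{\sum_{y^n} P\bigl(y^n \mid \hat{\theta}(y^n), \mathcal{M}\bigr)},
$$

    where $\hat{\theta}(\cdot)$ denotes the maximum likelihood parameters for the data in its argument. The denominator, a sum over all data sets of size $n$, is the quantity whose straightforward evaluation requires exponential time for discrete data and which the reviewed and extended algorithms compute efficiently.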

  11. Improved Differential Evolution Algorithm for Wireless Sensor Network Coverage Optimization

    Directory of Open Access Journals (Sweden)

    Xing Xu

    2014-04-01

    Full Text Available In order to improve the efficiency of ecological monitoring of Poyang Lake, an improved hybrid algorithm, mixing differential evolution with particle swarm optimization, is proposed and applied to optimize the coverage problem of a wireless sensor network. The effects of the population size and the number of iterations on the coverage performance are then discussed and analyzed. Four kinds of statistical results on the coverage rate are obtained through extensive simulation experiments.

  12. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48^{+0.41}_{-0.23} Gpc^{-3} yr^{-1}, with power-law indices of n_1 ≈ 1.7^{+0.6}_{-0.5} and n_2 ≈ -5.9^{+5.7}_{-0.1} for GRBs above and below a break point of z_1 ≈ 6.8^{+2.8}_{-3.2}. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
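
    The paper's feature set and trained models are not reproduced here; as a generic illustration of training one of the listed classifiers on simulated burst properties, a scikit-learn sketch with made-up features and labels could look like this.

```python
# Generic sketch of training a random-forest trigger classifier on simulated
# GRB features. Feature names, data and hyperparameters are illustrative
# assumptions, not the setup used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# hypothetical features: peak photon flux, duration, redshift
X = np.column_stack([rng.lognormal(0.0, 1.0, n),
                     rng.lognormal(1.0, 1.0, n),
                     rng.uniform(0.1, 10.0, n)])
# hypothetical label: "triggered" if the flux is high enough for the duration
y = (X[:, 0] * np.sqrt(X[:, 1]) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```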

  13. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, which uses the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN, which uses the regularized RNN, on short simulated time series. When the networks derived by RE_RMLP-RNN with different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two…

  14. Patch Based Multiple Instance Learning Algorithm for Object Tracking.

    Science.gov (United States)

    Wang, Zhenjie; Wang, Lijia; Zhang, Hua

    2017-01-01

    To deal with the problems of illumination changes or pose variations and serious partial occlusion, a patch based multiple instance learning (P-MIL) algorithm is proposed. The algorithm divides an object into many blocks. Then, the online MIL algorithm is applied to each block to obtain a strong classifier. The algorithm takes into account both the average classification score and the classification scores of all the blocks for detecting the object. In particular, compared with the whole-object based MIL algorithm, the P-MIL algorithm detects the object according to the unoccluded patches when partial occlusion occurs. After detecting the object, the learning rates for updating the weak classifiers' parameters are adaptively tuned. The classifier updating strategy avoids over-updating and under-updating the parameters. Finally, the proposed method is compared with other state-of-the-art algorithms on several classical videos. The experimental results show that the proposed method performs well, especially in the case of illumination changes or pose variations and partial occlusion. Moreover, the algorithm achieves real-time object tracking.

  15. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    Science.gov (United States)

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of obtaining the transmission rate in an ad hoc network consists in adjusting the power of each node so that the signal-to-interference ratio (SIR) requirement and the energy required to transmit from one node to another are satisfied at the same time. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection considering perfect power control, and our proposal achieves an improvement of 10% compared with the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that solves the problem of power combining, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) than the CSMA-CDMA protocol without optimization. Therefore, we show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339

  16. Classification of ETM+ Remote Sensing Image Based on Hybrid Algorithm of Genetic Algorithm and Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Haisheng Song

    2013-01-01

    Full Text Available The back propagation neural network (BPNN) algorithm can be used for supervised classification in the processing of remote sensing images. However, its defects are obvious: it easily falls into local minima, converges slowly, and offers no principled way to determine the number of intermediate hidden-layer nodes. The genetic algorithm (GA) has the advantages of global optimization and not easily falling into local minima, but it has the disadvantage of poor local search capability. This paper uses the GA to generate the initial structure of the BPNN. Then, a stable, efficient, and fast BP classification network is obtained by making fine adjustments with the improved BP algorithm. Finally, we use the hybrid algorithm to perform classification on a remote sensing image and compare it with the improved BP algorithm and the traditional maximum likelihood classification (MLC) algorithm. Experimental results show that the hybrid algorithm outperforms both the improved BP algorithm and the MLC algorithm.
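
    A compact sketch of the hybrid idea, on a toy two-class task, is given below: a small genetic algorithm searches for a good initial weight vector of a one-hidden-layer network, and plain back-propagation then fine-tunes it. The data, network size, and GA settings are illustrative assumptions, not those of the paper.

```python
# GA + back-propagation hybrid sketch: a GA evolves flat weight vectors of a
# small one-hidden-layer network (fitness = -MSE); the best individual is then
# fine-tuned with plain gradient descent. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)    # toy two-class task

n_in, n_hid, n_out = 2, 6, 1
n_w = n_in * n_hid + n_hid + n_hid * n_out + n_out           # flat weight count

def unpack(w):
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    return W1, b1, W2, w[i:]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, out

def mse(w):
    return np.mean((forward(w, X)[1] - y) ** 2)

# --- genetic stage: evolve flat weight vectors, fitness = -MSE ---
pop = rng.normal(0, 1, (40, n_w))
for gen in range(30):
    order = np.argsort([mse(ind) for ind in pop])             # best (lowest MSE) first
    parents = pop[order[:20]]                                 # selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(n_w) < 0.5                          # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, n_w))  # mutation
    pop = np.vstack([parents, children])

w = pop[np.argmin([mse(ind) for ind in pop])]                 # GA's best individual

# --- back-propagation stage: fine-tune the GA's best individual ---
lr = 0.5
for epoch in range(500):
    W1, b1, W2, b2 = unpack(w)
    h, out = forward(w, X)
    d_out = (out - y) * out * (1 - out) * 2 / len(X)          # dMSE/d(pre-activation)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)                       # tanh derivative
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    w = np.concatenate([(W1 - lr * dW1).ravel(), b1 - lr * db1,
                        (W2 - lr * dW2).ravel(), b2 - lr * db2])

print("final MSE:", mse(w))
```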

  17. Stochastic Variational Learning in Recurrent Spiking Networks

    Directory of Open Access Journals (Sweden)

    Danilo eJimenez Rezende

    2014-04-01

    Full Text Available The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories, and the derived learning rule has the form of a local spike-timing-dependent plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  18. Construction cost estimation of spherical storage tanks: artificial neural networks and hybrid regression—GA algorithms

    Science.gov (United States)

    Arabzadeh, Vida; Niaki, S. T. A.; Arabzadeh, Vahid

    2017-10-01

    One of the most important processes in the early stages of construction projects is to estimate the cost involved. This process involves a wide range of uncertainties, which make it a challenging task. Because of unknown issues, using the experience of experts or looking for similar cases are the conventional ways to deal with cost estimation. The current study presents data-driven methods for cost estimation based on the application of artificial neural network (ANN) and regression models. The learning algorithms of the ANN are Levenberg-Marquardt and Bayesian regularization. Moreover, the regression models are hybridized with a genetic algorithm to obtain better estimates of the coefficients. The methods are applied to a real case, where the input parameters of the models are assigned based on the key issues involved in a spherical tank construction. The results reveal a high correlation between the estimated cost and the real cost, and both ANNs perform better than the hybridized regression models. In addition, the ANN with the Levenberg-Marquardt learning algorithm (LMNN) obtains a better estimation than the ANN with the Bayesian-regularized learning algorithm (BRNN). The correlation between real data and estimated values is over 90%, while the mean square error is around 0.4. The proposed LMNN model can be effective in reducing uncertainty and complexity in the early stages of a construction project.

  19. A hybrid learning scheme combining EM and MASMOD algorithms for fuzzy local linearization modeling.

    Science.gov (United States)

    Gan, Q; Harris, C J

    2001-01-01

    Fuzzy local linearization (FLL) is a useful divide-and-conquer method for coping with complex problems such as modeling unknown nonlinear systems from data for state estimation and control. Based on a probabilistic interpretation of FLL, the paper proposes a hybrid learning scheme for FLL modeling, which uses a modified adaptive spline modeling (MASMOD) algorithm to construct the antecedent parts (membership functions) in the FLL model, and an expectation-maximization (EM) algorithm to parameterize the consequent parts (local linear models). The hybrid method not only has an approximation ability as good as most neuro-fuzzy network models, but also produces a parsimonious network structure (gain from MASMOD) and provides covariance information about the model error (gain from EM) which is valuable in applications such as state estimation and control. Numerical examples on nonlinear time-series analysis and nonlinear trajectory estimation using FLL models are presented to validate the derived algorithm.

  20. Network Learning and Innovation in SME Formal Networks

    Directory of Open Access Journals (Sweden)

    Jivka Deiters

    2013-02-01

    Full Text Available The driver for this paper is the need to better understand the potential for learning and innovation that networks can provide, especially for small and medium-sized enterprises (SMEs), which comprise by far the majority of enterprises in the food sector. With the challenges the food sector is facing in the near future, learning and innovation or, more focused, as discussed in the paper, 'learning for innovation' are not just opportunities but preconditions for the sustainability of the sector. Network initiatives that could provide appropriate support involve social interaction and knowledge exchange, learning, competence development, and coordination (organization and management of implementation). The analysis identifies case studies in any of these orientations, which serve different stages of the innovation process: invention and implementation. The variety of network case studies covers networks linked to a focus group for training, research, or consulting, networks dealing with focused market-oriented product or process development, promotional networks, and networks for open exchange and social networking.

  1. Robustness of learning algorithms using hinge loss with outlier indicators.

    Science.gov (United States)

    Kanamori, Takafumi; Fujiwara, Shuhei; Takeda, Akiko

    2017-10-01

    We propose a unified formulation of robust learning methods for classification and regression problems. In the learning methods, the hinge loss is used with outlier indicators in order to detect outliers in the observed data. To analyze the robustness property, we evaluate the breakdown point of the learning methods in the situation that the outlier ratio is not necessarily small. Although minimization of the hinge loss with outlier indicators is a non-convex optimization problem, we prove that any local optimal solution of our learning algorithms has the robustness property. The theoretical findings are confirmed in numerical experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. MODA: an efficient algorithm for network motif discovery in biological networks.

    Science.gov (United States)

    Omidi, Saeed; Schreiber, Falk; Masoudi-Nejad, Ali

    2009-10-01

    In recent years, interest has been growing in the study of complex networks. Since Erdös and Rényi (1960) proposed their random graph model about 50 years ago, many researchers have investigated and shaped this field. Many indicators have been proposed to assess the global features of networks. Recently, an active research area has developed in studying local features named motifs as the building blocks of networks. Unfortunately, network motif discovery is a computationally hard problem and finding rather large motifs (larger than 8 nodes) by means of current algorithms is impractical as it demands too much computational effort. In this paper, we present a new algorithm (MODA) that incorporates techniques such as a pattern growth approach for extracting larger motifs efficiently. We have tested our algorithm and found it able to identify larger motifs with more than 8 nodes more efficiently than most of the current state-of-the-art motif discovery algorithms. While most of the algorithms rely on induced subgraphs as motifs of the networks, MODA is able to extract both induced and non-induced subgraphs simultaneously. The MODA source code is freely available at: http://LBB.ut.ac.ir/Download/LBBsoft/MODA/

  3. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  4. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    Science.gov (United States)

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.

  5. Quantitative learning strategies based on word networks

    Science.gov (United States)

    Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng

    2018-02-01

    Learning English requires a considerable effort, but the way that vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimizing the learning process will have a significant impact on English learning and teaching. The recent developments in big data analysis and complex network science provide additional opportunities to design and further investigate strategies for English learning. In this paper, quantitative English learning strategies based on a word network and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for the unlearned words and the dynamic updating of the network are studied and analyzed. The results suggest that the quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and the segmented strategies outperform the other strategies. The results provide opportunities for researchers and practitioners to reconsider the way English is taught and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.
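
    The abstract does not spell out the weighting rule, but one plausible reading of it can be sketched with networkx: unlearned words are ranked by their frequency multiplied by the co-occurrence weight they share with words already learned, and the ranking is updated as words are learned. The toy network and the exact formula are assumptions for illustration only.

```python
# Sketch of a word-network learning strategy: rank unlearned words by the
# support they receive from already-learned neighbours, combining word
# frequency with network structure. The weighting rule and toy network are
# illustrative assumptions; the paper's exact scheme may differ.
import networkx as nx

# toy co-occurrence network: edge weight = co-occurrence count
G = nx.Graph()
G.add_weighted_edges_from([
    ("the", "cat", 9), ("the", "dog", 8), ("cat", "sat", 4),
    ("dog", "ran", 3), ("sat", "mat", 2), ("ran", "fast", 2),
])
frequency = {"the": 100, "cat": 30, "dog": 28, "sat": 12,
             "ran": 10, "mat": 5, "fast": 6}

learned = {"the"}                       # words already known

def learning_weight(word):
    """Frequency times the total edge weight to already-learned neighbours."""
    support = sum(G[word][nbr]["weight"] for nbr in G[word] if nbr in learned)
    return frequency[word] * support

# greedily learn the word with the highest weight, then update and repeat
for _ in range(3):
    candidates = [w for w in G if w not in learned]
    best = max(candidates, key=learning_weight)
    learned.add(best)
    print("learn next:", best)
```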

  6. A noise tolerant fine tuning algorithm for the Naïve Bayesian learning algorithm

    Directory of Open Access Journals (Sweden)

    Khalil El Hindi

    2014-07-01

    Full Text Available This work improves on the FTNB algorithm to make it more tolerant to noise. The FTNB algorithm augments the Naïve Bayesian (NB) learning algorithm with a fine-tuning stage in an attempt to find better estimates of the probability terms involved. The fine-tuning stage has proved effective in improving the classification accuracy of the NB; however, it makes the NB algorithm more sensitive to noise in a training set. This work presents several modifications of the fine-tuning stage to make it more tolerant to noise. Our empirical results using 47 data sets indicate that the proposed methods greatly enhance the algorithm's tolerance to noise. Furthermore, one of the proposed methods improves the performance of the fine-tuning method on many noise-free data sets.

  7. Learning in innovation networks: Some simulation experiments

    Science.gov (United States)

    Gilbert, Nigel; Ahrweiler, Petra; Pyka, Andreas

    2007-05-01

    According to the organizational learning literature, the greatest competitive advantage a firm has is its ability to learn. In this paper, a framework for modeling learning competence in firms is presented to improve the understanding of managing innovation. Firms with different knowledge stocks attempt to improve their economic performance by engaging in radical or incremental innovation activities and through partnerships and networking with other firms. In trying to vary and/or to stabilize their knowledge stocks by organizational learning, they attempt to adapt to environmental requirements while the market strongly selects on the results. The simulation experiments show the impact of different learning activities, underlining the importance of innovation and learning.

  8. An Extended Neo-Fuzzy Neuron and its Adaptive Learning Algorithm

    OpenAIRE

    Bodyanskiy, Yevgeniy V.; Tyshchenko, Oleksii K.; Kopaliani, Daria S.

    2016-01-01

    A modification of the neo-fuzzy neuron is proposed (an extended neo-fuzzy neuron (ENFN)) that is characterized by improved approximating properties. An adaptive learning algorithm is proposed that has both tracking and smoothing properties and solves prediction, filtering and smoothing tasks of non-stationary “noisy” stochastic and chaotic signals. An ENFN distinctive feature is its computational simplicity compared to other artificial neural networks and neuro-fuzzy systems.

  9. Secure Multicast Routing Algorithm for Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    Rakesh Matam

    2016-01-01

    Full Text Available Multicast is an indispensable communication technique in wireless mesh networks (WMNs). Many applications in WMNs, including multicast TV, audio and video conferencing, and multiplayer social gaming, use multicast transmission. On the other hand, security in multicast transmissions is crucial, without which the network services are significantly disrupted. Existing secure routing protocols that address different active attacks are still vulnerable due to the subtle nature of flaws in protocol design. Moreover, existing secure routing protocols assume that adversarial nodes cannot share an out-of-band communication channel, which rules out the possibility of a wormhole attack. In this paper, we propose SEMRAW (SEcure Multicast Routing Algorithm for Wireless mesh networks), which is resistant to all known active threats, including the wormhole attack. SEMRAW employs digital signatures to prevent a malicious node from gaining illegitimate access to the message contents. The security of SEMRAW is evaluated using the simulation paradigm approach.

  10. Evolving neural networks using a genetic algorithm for heartbeat classification.

    Science.gov (United States)

    Sekkal, Mansouria; Chikh, Mohamed Amine; Settouti, Nesma

    2011-07-01

    This study investigates the effectiveness of a genetic algorithm (GA) evolved neural network (NN) classifier and its application to the classification of premature ventricular contraction (PVC) beats. As there is no standard procedure to determine the network structure for complicated cases, generally the design of the NN would be dependent on the user's experience. To prevent this problem, we propose a neural classifier that uses a GA for the determination of optimal connections between neurons for better recognition. The MIT-BIH arrhythmia database is employed to evaluate its accuracy. First, the topology of the NN was determined using the trial and error method. Second, the genetic operators were carefully designed to optimize the neural network structure. Performance and accuracy of the two techniques are presented and compared. Copyright © 2011 Informa UK, Ltd.

  11. Learning Sorting Algorithms through Visualization Construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed…

  12. Algorithms for energy efficiency in wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Busse, M.

    2007-01-21

    The recent advances in microsensor and semiconductor technology have opened a new field within computer science: the networking of small-sized sensors which are capable of sensing, processing, and communicating. Such wireless sensor networks offer new applications in the areas of habitat and environment monitoring, disaster control and operation, military and intelligence control, object tracking, video surveillance, traffic control, as well as in health care and home automation. It is likely that the deployed sensors will be battery-powered, which will limit the energy capacity significantly. Thus, energy efficiency becomes one of the main challenges that need to be taken into account, and the design of energy-efficient algorithms is a major contribution of this thesis. As the wireless communication in the network is one of the main energy consumers, we first consider in detail the characteristics of wireless communication. By using the embedded sensor board (ESB) platform recently developed by the Free University of Berlin, we analyze the means of forward error correction and propose an appropriate resync mechanism, which improves the communication between two ESB nodes substantially. Afterwards, we focus on the forwarding of data packets through the network. We present the algorithms energy-efficient forwarding (EEF), lifetime-efficient forwarding (LEF), and energy-efficient aggregation forwarding (EEAF). While EEF is designed to maximize the number of data bytes delivered per energy unit, LEF additionally takes into account the residual energy of forwarding nodes. In so doing, LEF further prolongs the lifetime of the network. Energy savings due to data aggregation and in-network processing are exploited by EEAF. Besides single-link forwarding, in which data packets are sent to only one forwarding node, we also study the impact of multi-link forwarding, which exploits the broadcast characteristics of the wireless medium by sending packets to several (potential

  13. An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.

    Science.gov (United States)

    Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin

    2016-12-01

    Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper, learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, the comparison with several existing well-known algorithms: nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection, on 15 multiobjective benchmark problems, shows that the proposed algorithm is able to find more accurate and evenly distributed Pareto-optimal fronts than the compared ones.

  14. An improved algorithm for learning long-term dependency problems in adaptive processing of data structures.

    Science.gov (United States)

    Cho, Siu-Yeung; Chi, Zheru; Siu, Wan-Chi; Tsoi, Ah Chung

    2003-01-01

    Many researchers have explored the use of neural-network representations for the adaptive processing of data structures. One of the most popular learning formulations of data structure processing is backpropagation through structure (BPTS). The BPTS algorithm has been successfully applied to a number of learning tasks that involve structural patterns such as logo and natural scene classification. The main limitations of the BPTS algorithm are its slow convergence speed and the long-term dependency problem for the adaptive processing of data structures. In this paper, an improved algorithm is proposed to solve these problems. The idea of this algorithm is to optimize the free learning parameters of the neural network in the node representation by using least-squares-based optimization methods in a layer-by-layer fashion. Not only can fast convergence speed be achieved, but the long-term dependency problem can also be overcome, since the vanishing of gradient information is avoided when our approach is applied to very deep tree structures.

  15. Learning Search Algorithms: An Educational View

    Directory of Open Access Journals (Sweden)

    Ales Janota

    2014-12-01

    Full Text Available Artificial intelligence methods find practical usage in many applications, including the maritime industry. The paper concentrates on the methods of uninformed and informed search, potentially usable in solving complex problems based on the state space representation. The problem of introducing search algorithms to newcomers has both technical and psychological dimensions. The authors show how it is possible to cope with both of them through the design and use of specialized authoring systems. A typical example of searching a path through a maze is used to demonstrate how to test, observe, and compare the properties of various search strategies. Performance of the search methods is evaluated based on common criteria.
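
    As a concrete instance of the maze example mentioned in the abstract, a breadth-first (uninformed) search over a small grid maze can be sketched as follows; the maze layout is an illustrative assumption.

```python
# Breadth-first (uninformed) search through a small grid maze, in the spirit
# of the maze example mentioned above. The maze itself is an illustrative
# assumption; '#' marks walls, '.' marks free cells.
from collections import deque

MAZE = ["S..#",
        ".#.#",
        ".#..",
        "...G"]

def bfs(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "G")
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and maze[nr][nc] != "#" and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    path, cell = [], goal
    while cell is not None:                  # walk back from goal to start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

print(bfs(MAZE))   # shortest path from S to G, as a list of (row, col) cells
```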

  16. Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2016-01-01

    Full Text Available Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms have low performance on some indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the non-vehicle pixels are removed with visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the last step. The proposed algorithm is tested on around 6000 images and achieves a detection rate of 92.3% at a processing rate of 25 Hz, which is better than existing methods.

  17. Brain Networks of Explicit and Implicit Learning

    Science.gov (United States)

    Yang, Jing; Li, Ping

    2012-01-01

    Are explicit versus implicit learning mechanisms reflected in the brain as distinct neural structures, as previous research indicates, or are they distinguished by brain networks that involve overlapping systems with differential connectivity? In this functional MRI study we examined the neural correlates of explicit and implicit learning of artificial grammar sequences. Using effective connectivity analyses we found that brain networks of different connectivity underlie the two types of learning: while both processes involve activation in a set of cortical and subcortical structures, explicit learners engage a network that uses the insula as a key mediator whereas implicit learners evoke a direct frontal-striatal network. Individual differences in working memory also differentially impact the two types of sequence learning. PMID:22952624

  18. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    Science.gov (United States)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the radar wind field. In order to reduce the residual error of the wind field prediction that uses the BP neural network and grey algorithm, the minimum value of the residual error function is calculated: a BP neural network is trained on the residuals of the grey algorithm, the trained network model is used to forecast the residual sequence, and the predicted residual sequence is then used to modify the forecast sequence of the grey algorithm. The test data show that the grey algorithm modified by the BP neural network can effectively reduce the residual value and improve the prediction precision.
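
    A rough sketch of the general scheme described above — fit a GM(1,1) grey model, train a small neural network on its residuals, and add the predicted residual to the grey forecast — is given below. The synthetic wind series, the use of scikit-learn's MLPRegressor in place of a hand-rolled BP network, and all parameter choices are assumptions for illustration, not the paper's setup.

```python
# Sketch of grey prediction with neural-network residual correction: a GM(1,1)
# grey model gives a baseline forecast, and a small MLP learns to predict its
# residuals so the corrected forecast is baseline + predicted residual.
import numpy as np
from sklearn.neural_network import MLPRegressor

def gm11_fit_predict(x0, horizon):
    """Fit GM(1,1) to x0 and return fitted + forecast values (len(x0)+horizon)."""
    n = len(x0)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development coeff., grey input
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

rng = np.random.default_rng(1)
t = np.arange(60)
wind = 8.0 + 0.05 * t + rng.normal(0, 0.3, 60)          # synthetic wind speeds

fit = gm11_fit_predict(wind[:50], horizon=10)
residuals = wind[:50] - fit[:50]

# small MLP maps the time index to the grey model's residual
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
mlp.fit(t[:50].reshape(-1, 1), residuals)

corrected = fit[50:] + mlp.predict(t[50:60].reshape(-1, 1))
print("grey-only RMSE:", np.sqrt(np.mean((fit[50:] - wind[50:]) ** 2)))
print("corrected RMSE:", np.sqrt(np.mean((corrected - wind[50:]) ** 2)))
```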

  19. A self-learning call admission control scheme for CDMA cellular networks.

    Science.gov (United States)

    Liu, Derong; Zhang, Yi; Zhang, Huaguang

    2005-09-01

    In the present paper, a call admission control scheme that can learn from the network environment and user behavior is developed for code division multiple access (CDMA) cellular networks that handle both voice and data services. The idea is built upon a novel learning control architecture with only a single module, instead of the two or three modules in adaptive critic designs (ACDs). The use of the adaptive critic approach for call admission control in wireless cellular networks is new. The call admission controller can perform learning in real-time as well as offline environments, and the controller improves its performance as it gains more experience. Another important contribution of the present work is the choice of utility function for the self-learning control approach, which makes the learning process much more efficient than existing learning control methods. The performance of our algorithm is shown through computer simulation and compared with existing algorithms.

  20. Prolonging the Lifetime of Wireless Sensor Networks Interconnected to Fixed Network Using Hierarchical Energy Tree Based Routing Algorithm

    Directory of Open Access Journals (Sweden)

    M. Kalpana

    2014-01-01

    Full Text Available This research work proposes a mathematical model for the lifetime of wireless sensor networks (WSN). It also proposes an energy efficient routing algorithm for WSN called the hierarchical energy tree based routing algorithm (HETRA), based on a hierarchical energy tree constructed using the available energy in each node. The energy efficiency is further augmented by reducing packet drops using an exponential congestion control algorithm (TCP/EXP). The algorithms are evaluated in WSNs interconnected to a fixed network with seven distribution patterns, simulated in ns2, and compared with existing algorithms based on parameters such as number of data packets, throughput, network lifetime, and data packets average network lifetime product. Evaluation and simulation results show that the combination of HETRA and TCP/EXP achieves the longest network lifetime in all the patterns. The lifetime of the network with the HETRA algorithm is approximately 3.2 times that of the network implemented with AODV.

  1. Prolonging the lifetime of wireless sensor networks interconnected to fixed network using hierarchical energy tree based routing algorithm.

    Science.gov (United States)

    Kalpana, M; Dhanalakshmi, R; Parthiban, P

    2014-01-01

    This research work proposes a mathematical model for the lifetime of wireless sensor networks (WSN). It also proposes an energy efficient routing algorithm for WSN called the hierarchical energy tree based routing algorithm (HETRA), based on a hierarchical energy tree constructed using the available energy in each node. The energy efficiency is further augmented by reducing packet drops using an exponential congestion control algorithm (TCP/EXP). The algorithms are evaluated in WSNs interconnected to a fixed network with seven distribution patterns, simulated in ns2, and compared with existing algorithms based on parameters such as number of data packets, throughput, network lifetime, and data packets average network lifetime product. Evaluation and simulation results show that the combination of HETRA and TCP/EXP achieves the longest network lifetime in all the patterns. The lifetime of the network with the HETRA algorithm is approximately 3.2 times that of the network implemented with AODV.

  2. Electricity price prediction: a comparison of machine learning algorithms

    OpenAIRE

    Wormstrand, Øystein

    2011-01-01

    In this master thesis we have worked with seven different machine learning methods to discover which algorithm is best suited for predicting the next-day electricity price for the Norwegian price area NO1 on Nord Pool Spot. Based on historical price, consumption, weather and reservoir data, we have created our own data sets. Data from 2001 through 2009 was gathered, where the last one third of the period was used for testing. We have tested our selected machine learning methods ...

  3. A new constructive algorithm for architectural and functional adaptation of artificial neural networks.

    Science.gov (United States)

    Islam, Md Monirul; Sattar, Md Abdus; Amin, Md Faijul; Yao, Xin; Murase, Kazuyuki

    2009-12-01

    The generalization ability of artificial neural networks (ANNs) is greatly dependent on their architectures. Constructive algorithms provide an attractive automatic way of determining a near-optimal ANN architecture for a given problem. Several such algorithms have been proposed in the literature and shown their effectiveness. This paper presents a new constructive algorithm (NCA) in automatically determining ANN architectures. Unlike most previous studies on determining ANN architectures, NCA puts emphasis on architectural adaptation and functional adaptation in its architecture determination process. It uses a constructive approach to determine the number of hidden layers in an ANN and of neurons in each hidden layer. To achieve functional adaptation, NCA trains hidden neurons in the ANN by using different training sets that were created by employing a similar concept used in the boosting algorithm. The purpose of using different training sets is to encourage hidden neurons to learn different parts or aspects of the training data so that the ANN can learn the whole training data in a better way. In this paper, the convergence and computational issues of NCA are analytically studied. The computational complexity of NCA is found to be O(W × P_t × τ), where W is the number of weights in the ANN, P_t is the number of training examples, and τ is the number of training epochs. This complexity has the same order as what the backpropagation learning algorithm requires for training a fixed ANN architecture. A set of eight classification and two approximation benchmark problems was used to evaluate the performance of NCA. The experimental results show that NCA can produce ANN architectures with fewer hidden neurons and better generalization ability compared to existing constructive and nonconstructive algorithms.

  4. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation

    Directory of Open Access Journals (Sweden)

    Hongxun Wang

    2017-05-01

    Full Text Available The relationships between the fatigue crack growth rate (da/dN) and the stress intensity factor range (ΔK) are not always linear even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithms optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as the classical two-parameter model (K* approach). The results show that the predictions of MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM based algorithms show overall the best agreement with the experimental data out of the three MLAs, for its global optimization and extrapolation ability.

  5. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.

    Science.gov (United States)

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-05-18

    The relationships between the fatigue crack growth rate (da/dN) and stress intensity factor range (ΔK) are not always linear even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithms optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as the classical two-parameter model (K* approach). The results show that the predictions of MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM based algorithms show overall the best agreement with the experimental data out of the three MLAs, for its global optimization and extrapolation ability.

  6. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation

    Science.gov (United States)

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-01-01

    The relationships between the fatigue crack growth rate (da/dN) and stress intensity factor range (ΔK) are not always linear even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithms optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as the classical two-parameter model (K* approach). The results show that the predictions of MLAs are superior to those of K* approach in accuracy and effectiveness, and the ELM based algorithms show overall the best agreement with the experimental data out of the three MLAs, for its global optimization and extrapolation ability. PMID:28772906
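
    The abstracts above single out the extreme learning machine; its core recipe is short enough to sketch: hidden-layer weights are drawn at random and only the output weights are solved for, in closed form, with a pseudo-inverse. The synthetic Paris-law-style data and layer size below are illustrative assumptions, not the paper's data.

```python
# Minimal extreme learning machine (ELM) regression sketch: random hidden-layer
# weights, output weights solved in closed form with a pseudo-inverse. The
# synthetic crack-growth-style data and layer size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
delta_k = rng.uniform(10.0, 60.0, (300, 1))             # stress intensity range
dadn = 1e-9 * delta_k[:, 0] ** 3 * (1 + rng.normal(0, 0.05, 300))  # Paris-like law
X, y = np.log(delta_k), np.log(dadn)

n_hidden = 30
W = rng.normal(0, 1, (X.shape[1], n_hidden))            # random input weights
b = rng.normal(0, 1, n_hidden)                          # random biases
H = np.tanh(X @ W + b)                                  # hidden-layer outputs
beta = np.linalg.pinv(H) @ y                            # output weights (closed form)

y_hat = np.tanh(X @ W + b) @ beta
print("training RMSE (log da/dN):", np.sqrt(np.mean((y_hat - y) ** 2)))
```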

  7. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability, and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified and a platform for the development of appropriate model formalisms is established.

  8. An algorithm J-SC of detecting communities in complex networks

    Science.gov (United States)

    Hu, Fang; Wang, Mingzhu; Wang, Yanran; Hong, Zhehao; Zhu, Yanhui

    2017-11-01

    Currently, community detection in complex networks has become a hot topic. In this paper, based on the Spectral Clustering (SC) algorithm, we introduce the idea of Jacobi iteration and propose a novel algorithm, J-SC, for community detection in complex networks. The accuracy and efficiency of this algorithm are tested on several representative real-world networks and several computer-generated networks. The experimental results indicate that the J-SC algorithm can accurately and effectively detect the community structure in these networks. Meanwhile, compared with the state-of-the-art community detection algorithms SC, SOM, K-means, Walktrap, and Fastgreedy, the J-SC algorithm performs better, achieving higher values of modularity and NMI. Moreover, the new algorithm runs faster than the SOM and Walktrap algorithms.
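
    The Jacobi-iteration variant itself is not reproduced here, but the spectral-clustering baseline that J-SC builds on can be sketched in a few lines: form the graph Laplacian, take the eigenvectors of its smallest eigenvalues, and cluster the rows with k-means. The toy graph and number of communities are illustrative assumptions.

```python
# Baseline spectral-clustering sketch for community detection (the SC method
# the paper starts from); J-SC additionally brings Jacobi iteration into the
# eigen-solution step, which is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

# adjacency matrix of a tiny graph with two obvious communities
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))
L = D - A                                               # unnormalised Laplacian
eigvals, eigvecs = np.linalg.eigh(L)                    # ascending eigenvalues
k = 2
U = eigvecs[:, :k]                                      # k smallest eigenvectors
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
print(labels)                                           # e.g. [0 0 0 1 1 1]
```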

  9. Radial basis function neural networks with sequential learning MRAN and its applications

    CERN Document Server

    Sundararajan, N; Wei Lu Ying

    1999-01-01

    This book presents in detail the newly developed sequential learning algorithm for radial basis function neural networks, which realizes a minimal network. This algorithm, created by the authors, is referred to as Minimal Resource Allocation Networks (MRAN). The book describes the application of MRAN in different areas, including pattern recognition, time series prediction, system identification, control, communication and signal processing. Benchmark problems from these areas have been studied, and MRAN is compared with other algorithms. In order to make the book self-contained, a review of t

  10. Learning and optimization with cascaded VLSI neural network building-block chips

    Science.gov (United States)

    Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.

    1992-01-01

    To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.

  11. A new dynamical layout algorithm for complex biochemical reaction networks.

    Science.gov (United States)

    Wegner, Katja; Kummer, Ursula

    2005-08-26

    To study complex biochemical reaction networks in living cells, researchers rely more and more on databases and computational methods. In order to facilitate computational approaches, visualisation techniques are highly important. Biochemical reaction networks, e.g. metabolic pathways, are often depicted as graphs, and these graphs should be drawn dynamically to provide flexibility in the context of different data. Conventional layout algorithms are not sufficient for every kind of pathway in biochemical research. This is mainly due to certain conventions to which biochemists/biologists are accustomed and which are not respected by conventional layout algorithms. A number of approaches have been developed to improve this situation. Some of these are used in the context of biochemical databases and make more or less use of the information in these databases to aid the layout process. However, visualisation is also becoming more and more important in modelling and simulation tools, which mostly do not offer additional connections to databases. Therefore, layout algorithms used in these tools have to work independently of any databases. In addition, all of the existing algorithms face some limitations with respect to the number of edge crossings when it comes to larger biochemical systems, due to the interconnectivity of these. Last but not least, in some cases biochemical conventions are not met properly. For these reasons, we have developed a new algorithm which tackles these problems by reducing the number of edge crossings in complex systems and taking further biological conventions into account to identify and visualise cycles. Furthermore, the algorithm is independent of database information in order to be easily adopted in any application. It can also be tested as part of the SimWiz package (free to download for academic users at [1]). The new algorithm reduces the complexity of pathways, as well as edge crossings and edge length, in the resulting graphical representation.

  12. Interactive Algorithms for Unsupervised Machine Learning

    Science.gov (United States)

    2015-06-01

    [Figure residue from the original abstract; only caption fragments are recoverable: probability of recovery/success for Algorithm 1 (adaptive sampling, mu = 1, 2) and SVT, plotted against the fraction of samples per column p, the rescaled sampling probability p/µ0, and np/log(n), with r = 5 and µ0 = 1.]

  13. A Flexible Reservation Algorithm for Advance Network Provisioning

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a given length of time. However, users currently cannot inquire about bandwidth availability, nor receive alternative suggestions when reservation requests fail. In general, the number of reservation options grows exponentially with the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided parameters of total volume and time constraints, and produces options for earliest completion and shortest duration. The theoretical complexity is only O(n^2 r^2) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
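    A minimal sketch of the kind of computation such a reservation option involves, under the simplifying assumption of piecewise-constant available bandwidth along a single fixed path; this is illustrative only and is not the paper's path-finding algorithm, and the slot format and function name are invented for the example.

```python
# Minimal sketch (not the paper's algorithm): given piecewise-constant available
# bandwidth on a single fixed path, find the earliest time by which a requested
# data volume can be transferred.  Slot format and names are illustrative.

def earliest_completion(slots, volume):
    """slots: list of (start, end, bandwidth) tuples, sorted by start time.
    volume: total amount of data to transfer (same units as bandwidth * time)."""
    remaining = volume
    for start, end, bw in slots:
        if bw <= 0:
            continue
        capacity = bw * (end - start)        # how much fits in this slot
        if capacity >= remaining:
            return start + remaining / bw    # finishes partway through the slot
        remaining -= capacity
    return None                              # volume cannot fit in the given horizon

# Example: bandwidth 10 available for t in [0,5), then 20 for t in [5,10); volume 100.
print(earliest_completion([(0, 5, 10), (5, 10, 20)], 100))  # -> 7.5
```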

  14. Outsmarting neural networks: an alternative paradigm for machine learning

    Energy Technology Data Exchange (ETDEWEB)

    Protopopescu, V.; Rao, N.S.V.

    1996-10-01

    We address three problems in machine learning, namely: (i) function learning, (ii) regression estimation, and (iii) sensor fusion, in the Probably and Approximately Correct (PAC) framework. We show that, under certain conditions, one can reduce the three problems above to the regression estimation. The latter is usually tackled with artificial neural networks (ANNs) that satisfy the PAC criteria, but have high computational complexity. We propose several computationally efficient PAC alternatives to ANNs to solve the regression estimation. Thereby we also provide efficient PAC solutions to the function learning and sensor fusion problems. The approach is based on cross-fertilizing concepts and methods from statistical estimation, nonlinear algorithms, and the theory of computational complexity, and is designed as part of a new, coherent paradigm for machine learning.

  15. Are deep neural networks really learning relevant features?

    DEFF Research Database (Denmark)

    Kereliuk, Corey; Sturm, Bob L.; Larsen, Jan

    In recent years deep neural networks (DNNs) have become a popular choice for audio content analysis. This may be attributed to various factors including advancements in training algorithms, computational power, and the potential for DNNs to implicitly learn a set of feature detectors. We have ... recently re-examined two works [sigtiaimproved; hamel2010learning] that consider DNNs for the task of music genre recognition (MGR). These papers conclude that frame-level features learned by DNNs offer an improvement over traditional, hand-crafted features such as Mel-frequency cepstrum ... leads one to question the degree to which the learned frame-level features are actually useful for MGR. We make available a reproducible software package allowing other researchers to completely duplicate our figures and results....

  16. Adaptive Learning in Weighted Network Games

    NARCIS (Netherlands)

    Bayer, Péter; Herings, P. Jean-Jacques; Peeters, Ronald; Thuijsman, Frank

    2017-01-01

    This paper studies adaptive learning in the class of weighted network games. This class of games includes applications like research and development within interlinked firms, crime within social networks, the economics of pollution, and defense expenditures within allied nations. We show that for

  17. Novel Spectrum Sensing Algorithms for OFDM Cognitive Radio Networks.

    Science.gov (United States)

    Shi, Zhenguo; Wu, Zhilu; Yin, Zhendong; Cheng, Qingqing

    2015-06-15

    Spectrum sensing technology plays an increasingly important role in cognitive radio networks. Consequently, several spectrum sensing algorithms have been proposed in the literature. In this paper, we present a new spectrum sensing algorithm, "Differential Characteristics-Based OFDM (DC-OFDM)", for detecting OFDM signals based on their differential characteristics. We use the value of the channel gain θ around zero to detect the presence of a primary user. Furthermore, utilizing the same differential operation, we improve two traditional OFDM sensing algorithms (cyclic prefix and pilot tone detection algorithms), and propose a "Differential Characteristics-Based Cyclic Prefix (DC-CP)" detector and a "Differential Characteristics-Based Pilot Tones (DC-PT)" detector, respectively. The DC-CP detector uses an auto-correlation vector to sense the spectrum, while the DC-PT detector takes the frequency-domain cross-correlation of the pilot tones (PT) as the test statistic to detect the primary user. Moreover, the distributions of the test statistics of the three proposed methods are derived. Simulation results illustrate that all three proposed methods achieve good performance at low signal-to-noise ratio (SNR) in the presence of timing delay. Specifically, the DC-OFDM detector achieves the best performance among the presented detectors. Moreover, both the DC-CP and DC-PT detectors achieve significant improvements compared with their corresponding original detectors.
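    For orientation, the sketch below shows a textbook cyclic-prefix autocorrelation detector for OFDM presence, the kind of correlation structure that cyclic-prefix based sensing exploits; it is not the paper's DC-CP or DC-PT statistic, and all signal parameters are invented for the example.

```python
# Illustrative cyclic-prefix (CP) autocorrelation detector for OFDM presence
# detection: a textbook baseline, not the paper's DC-CP/DC-PT statistics.
import numpy as np

def cp_autocorrelation_statistic(x, n_fft, n_cp):
    """Average correlation between each CP and the symbol tail it copies."""
    sym_len = n_fft + n_cp
    n_sym = len(x) // sym_len
    acc = 0.0 + 0.0j
    energy = 0.0
    for k in range(n_sym):
        s = x[k * sym_len:(k + 1) * sym_len]
        cp, tail = s[:n_cp], s[n_fft:n_fft + n_cp]   # the CP equals the last n_cp samples
        acc += np.vdot(tail, cp)
        energy += 0.5 * (np.sum(np.abs(cp) ** 2) + np.sum(np.abs(tail) ** 2))
    return np.abs(acc) / max(energy, 1e-12)          # large if OFDM structure is present

# Toy example: one noisy OFDM-like stream versus one pure-noise stream.
rng = np.random.default_rng(0)
n_fft, n_cp, n_sym = 64, 16, 50
syms = rng.standard_normal((n_sym, n_fft)) + 1j * rng.standard_normal((n_sym, n_fft))
time = np.fft.ifft(syms, axis=1)
ofdm = np.concatenate([np.hstack([s[-n_cp:], s]) for s in time])
noise = 0.1 * (rng.standard_normal(ofdm.shape) + 1j * rng.standard_normal(ofdm.shape))
print(cp_autocorrelation_statistic(ofdm + noise, n_fft, n_cp))  # well above the noise-only value
print(cp_autocorrelation_statistic(noise, n_fft, n_cp))         # near zero
```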

  18. A new algorithm for $H\\rightarrow\\tau\\bar{\\tau}$ invariant mass reconstruction using Deep Neural Networks

    CERN Document Server

    Dietrich, Felix

    2017-01-01

    Reconstructing the invariant mass in a Higgs boson decay event containing tau leptons turns out to be a challenging endeavour. The aim of this summer student project is to implement a new algorithm for this task, using deep neural networks and machine learning. The results are compared to SVFit, an existing algorithm that uses dynamical likelihood techniques. A neural network is found that reaches the accuracy of SVFit at low masses and even surpasses it at higher masses, while at the same time providing results a thousand times faster.

  19. Electronic Social Networks, Teaching, and Learning

    Science.gov (United States)

    Pidduck, Anne Banks

    2010-01-01

    This paper explores the relationship between electronic social networks, teaching, and learning. Previous studies have shown a strong positive correlation between student engagement and learning. By extending this work to engage instructors and add an electronic component, our study shows possible teaching improvement as well. In particular,…

  20. Realizing Wisdom Theory in Complex Learning Networks

    Science.gov (United States)

    Kok, Ayse

    2009-01-01

    The word "wisdom" is rarely seen in contemporary technology and learning discourse. This conceptual paper aims to provide some clear principles that answer the question: How can we establish wisdom in complex learning networks? By considering the nature of contemporary calls for wisdom the paper provides a metatheoretial framework to evaluate the…

  1. Machine Learning Algorithms for $b$-Jet Tagging at the ATLAS Experiment

    CERN Document Server

    Paganini, Michela; The ATLAS collaboration

    2017-01-01

    The separation of b-quark initiated jets from those coming from lighter quark flavours (b-tagging) is a fundamental tool for the ATLAS physics program at the CERN Large Hadron Collider. The most powerful b-tagging algorithms combine information from low-level taggers exploiting reconstructed track and vertex information using a multivariate classifier. The potential of modern Machine Learning techniques such as Recurrent Neural Networks and Deep Learning is explored using simulated events, and compared to that achievable from more traditional classifiers such as boosted decision trees.

  2. Methods of information theory and algorithmic complexity for network biology.

    Science.gov (United States)

    Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper

    2016-03-01

    We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdös-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.
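    As a small concrete example of one of the graph-level information measures discussed above, the sketch below computes the Shannon entropy of a graph's degree distribution in plain Python; the function name and the toy graphs are illustrative, not from the paper.

```python
# Minimal example of one graph-level information measure mentioned above:
# the Shannon entropy of a graph's degree distribution (names are illustrative).
import math
from collections import Counter

def degree_entropy(edges, n_nodes):
    """Entropy (in bits) of the degree distribution of an undirected graph."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.get(i, 0) for i in range(n_nodes))   # include isolated nodes
    return -sum((c / n_nodes) * math.log2(c / n_nodes) for c in counts.values())

star = [(0, i) for i in range(1, 6)]        # hub-and-spoke: only two degree values
path = [(i, i + 1) for i in range(5)]       # path graph: degrees 1 and 2
print(degree_entropy(star, 6), degree_entropy(path, 6))
```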

  3. Using of coevolutionary algorithm on P2P networks

    Directory of Open Access Journals (Sweden)

    Rezaee Alireza

    2009-01-01

    Full Text Available Multicast routing is the basic requirement for providing QoS (Quality of Service) in multimedia streaming on peer-to-peer networks. Building multicast trees that optimize delay cost while respecting the limited bandwidth of nodes and links (load balance constraints) is an NP-hard (Nondeterministic Polynomial time hard) problem. In this paper we use a co-evolutionary algorithm to build multicast trees with optimized average delay from the source to the clients, considering the limited capacity of nodes and links in the application layer. The numerical results show that the costs are much improved compared with other existing non-GA (Genetic Algorithm) approaches. Also, we use only a portion of every node's outage degree, which improves the results compared to using the entire outage degree.

  4. The Forward-Reverse Algorithm for Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2015-01-07

    In this work, we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which we solve a set of deterministic optimization problems where the SRNs are replaced by the classical ODE rates; then, during the second phase, the Monte Carlo version of the EM algorithm is applied starting from the output of the previous phase. Starting from a set of over-dispersed seeds, the output of our two-phase method is a cluster of maximum likelihood estimates obtained by using convergence assessment techniques from the theory of Markov chain Monte Carlo.

  5. Neural network implementations of data association algorithms for sensor fusion

    Science.gov (United States)

    Brown, Donald E.; Pittard, Clarence L.; Martin, Worthy N.

    1989-01-01

    The paper is concerned with locating a time varying set of entities in a fixed field when the entities are sensed at discrete time instances. At a given time instant a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the location of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented. Neural network implementation of the algorithms, along with simulation results comparing the approaches are provided.

  6. NASA Engineering Network Lessons Learned

    Data.gov (United States)

    National Aeronautics and Space Administration — The NASA Lessons Learned system provides access to official, reviewed lessons learned from NASA programs and projects. These lessons have been made available to the...

  7. Optimizing Cellular Networks Enabled with Renewal Energy via Strategic Learning.

    Science.gov (United States)

    Sohn, Insoo; Liu, Huaping; Ansari, Nirwan

    2015-01-01

    An important issue in the cellular industry is the rising energy cost and carbon footprint due to the rapid expansion of the cellular infrastructure. Greening cellular networks has thus attracted attention. Among the promising green cellular network techniques, the renewable energy-powered cellular network has drawn increasing attention as a critical element towards reducing carbon emissions due to massive energy consumption in the base stations deployed in cellular networks. Game theory is a branch of mathematics that is used to evaluate and optimize systems with multiple players with conflicting objectives and has been successfully used to solve various problems in cellular networks. In this paper, we model the green energy utilization and power consumption optimization problem of a green cellular network as a pilot power selection strategic game and propose a novel distributed algorithm based on a strategic learning method. The simulation results indicate that the proposed algorithm achieves correlated equilibrium of the pilot power selection game, resulting in optimum green energy utilization and power consumption reduction.
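    The abstract does not spell out the strategic learning rule, so the sketch below shows Hart & Mas-Colell regret matching, one standard learning rule whose empirical play converges to the set of correlated equilibria; the action set, payoff function and constant mu are invented for the example, and this is not claimed to be the paper's distributed algorithm.

```python
# Sketch of Hart & Mas-Colell regret matching, a standard strategic-learning rule whose
# empirical play converges to the set of correlated equilibria.  Actions, payoffs and
# the constant mu are illustrative; this is not the paper's distributed algorithm.
import random

def regret_matching(actions, payoff, rounds=2000, mu=None):
    """payoff(a, t): payoff of action a at round t (counterfactuals assumed observable)."""
    if mu is None:
        mu = 4.0 * len(actions)             # normalizer; must exceed attainable regret sums
    regret = {(j, k): 0.0 for j in actions for k in actions if j != k}
    counts = {a: 0 for a in actions}
    current = random.choice(actions)
    for t in range(rounds):
        vals = {a: payoff(a, t) for a in actions}
        for k in actions:                   # regret for not having switched current -> k
            if k != current:
                regret[(current, k)] += vals[k] - vals[current]
        counts[current] += 1
        r, acc, nxt = random.random(), 0.0, current
        for k in actions:                   # switch with prob. proportional to positive regret
            if k != current:
                acc += max(regret[(current, k)], 0.0) / ((t + 1) * mu)
                if r < acc:
                    nxt = k
                    break
        current = nxt
    return counts

# Toy pilot-power choice: level 2 is best on average, so it should dominate the play.
powers = [1, 2, 3]
print(regret_matching(powers, lambda a, t: -abs(a - 2) + random.gauss(0, 0.1)))
```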

  8. Managing and learning with multiple models: Objectives and optimization algorithms

    Science.gov (United States)

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.

  9. ISTA-Net: Iterative Shrinkage-Thresholding Algorithm Inspired Deep Network for Image Compressive Sensing

    KAUST Repository

    Zhang, Jian

    2017-06-24

    Traditional methods for image compressive sensing (CS) reconstruction solve a well-defined inverse problem that is based on a predefined CS model, which defines the underlying structure of the problem and is generally solved by employing convergent iterative solvers. These optimization-based CS methods face the challenge of choosing optimal transforms and tuning parameters in their solvers, while also suffering from high computational complexity in most cases. Recently, some deep network based CS algorithms have been proposed to improve CS reconstruction performance, while dramatically reducing time complexity as compared to optimization-based methods. Despite their impressive results, the proposed networks (either with fully-connected or repetitive convolutional layers) lack any structural diversity and they are trained as a black box, void of any insights from the CS domain. In this paper, we combine the merits of both types of CS methods: the structure insights of optimization-based method and the performance/speed of network-based ones. We propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $l_1$ norm CS reconstruction model. ISTA-Net essentially implements a truncated form of ISTA, where all ISTA-Net parameters are learned end-to-end to minimize a reconstruction error in training. Borrowing more insights from the optimization realm, we propose an accelerated version of ISTA-Net, dubbed FISTA-Net, which is inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Interestingly, this acceleration naturally leads to skip connections in the underlying network design. Extensive CS experiments demonstrate that the proposed ISTA-Net and FISTA-Net outperform existing optimization-based and network-based CS methods by large margins, while maintaining a fast runtime.
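    To make the unrolled update concrete, the sketch below runs a plain (non-learned) ISTA iteration for the l1-regularized CS model; this is the iteration that ISTA-Net truncates and learns end-to-end, not the ISTA-Net implementation itself, and the problem sizes and regularization weight are illustrative.

```python
# Plain ISTA for: minimize_x 0.5*||A x - y||^2 + lam*||x||_1; this is the iteration
# that ISTA-Net unrolls into learned network layers (it is not ISTA-Net itself).
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)   # compressive measurement matrix
x_true = np.zeros(200)
x_true[[3, 70, 150]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = ista(A, y)
print(np.round(x_hat[[3, 70, 150]], 2))            # should be close to the true nonzeros
```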

  10. MIRA: mutual information-based reporter algorithm for metabolic networks.

    Science.gov (United States)

    Cicek, A Ercument; Roeder, Kathryn; Ozsoyoglu, Gultekin

    2014-06-15

    Discovering the transcriptional regulatory architecture of the metabolism has been an important topic to understand the implications of transcriptional fluctuations on metabolism. The reporter algorithm (RA) was proposed to determine the hot spots in metabolic networks, around which transcriptional regulation is focused owing to a disease or a genetic perturbation. Using a z-score-based scoring scheme, RA calculates the average statistical change in the expression levels of genes that are neighbors to a target metabolite in the metabolic network. The RA approach has been used in numerous studies to analyze cellular responses to the downstream genetic changes. In this article, we propose a mutual information-based multivariate reporter algorithm (MIRA) with the goal of eliminating the following problems in detecting reporter metabolites: (i) conventional statistical methods suffer from small sample sizes, (ii) as z-score ranges from minus to plus infinity, calculating average scores can lead to canceling out opposite effects and (iii) analyzing genes one by one, then aggregating results can lead to information loss. MIRA is a multivariate and combinatorial algorithm that calculates the aggregate transcriptional response around a metabolite using mutual information. We show that MIRA's results are biologically sound, empirically significant and more reliable than RA. We apply MIRA to gene expression analysis of six knockout strains of Escherichia coli and show that MIRA captures the underlying metabolic dynamics of the switch from aerobic to anaerobic respiration. We also apply MIRA to an Autism Spectrum Disorder gene expression dataset. Results indicate that MIRA reports metabolites that highly overlap with recently found metabolic biomarkers in the autism literature. Overall, MIRA is a promising algorithm for detecting metabolic drug targets and understanding the relation between gene expression and metabolic activity. The code is implemented in C# language using
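    For reference, the sketch below implements a common form of the classical z-score reporter algorithm (RA) that MIRA is proposed to improve upon, aggregating neighbouring-gene z-scores as Z_met = (1/sqrt(k)) * sum(z_i); the mutual-information machinery of MIRA is not shown, and the data structures and example values are invented.

```python
# Sketch of the classical z-score reporter algorithm (RA) that MIRA improves upon:
# a metabolite's score aggregates the z-scores of its neighbouring genes as
# Z_met = (1/sqrt(k)) * sum(z_i).  Structures and values are illustrative.
import math

def reporter_scores(metabolite_neighbors, gene_z):
    """metabolite_neighbors: {metabolite: [gene, ...]}; gene_z: {gene: z-score}."""
    scores = {}
    for met, genes in metabolite_neighbors.items():
        zs = [gene_z[g] for g in genes if g in gene_z]
        if zs:
            scores[met] = sum(zs) / math.sqrt(len(zs))
    return scores

neighbors = {"pyruvate": ["pykF", "ldhA", "aceE"], "atp": ["atpA", "atpD"]}
z = {"pykF": 2.1, "ldhA": 1.8, "aceE": 0.4, "atpA": -0.2, "atpD": 0.3}
print(reporter_scores(neighbors, z))   # pyruvate aggregates a stronger signal than atp
```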

  11. Optimization of neural network algorithm of the land market description

    Directory of Open Access Journals (Sweden)

    M. A. Karpovich

    2016-01-01

    Full Text Available The advantages of neural network technology over traditional descriptions of dynamically changing systems, such as a modern land market, are shown. The basic difficulty arising in the practical implementation of neural network models of the land and construction markets is identified: the formation of a representative set of training and test examples. A requirement for a correct description of the current economic situation is determined: the training set should not contain regions of the feature space with a low density of observations. Methods for optimizing the empirical data array that avoid long-range extrapolation beyond the regions where examples are concentrated are formulated. It is shown that the radical approach of collecting supplementary information to optimize the training and test sets involves significant costs in time and resources for economic problems, and its cost/efficiency ratio is worse than that of an algorithm that optimizes neural network models of the land market on a fixed set of empirical data. An optimization algorithm based on transforming the data array, expanding the regions where examples are concentrated and compressing the regions with a low density of observations, is analyzed in detail. A significant reduction in the relative error of land price description is demonstrated on the specific example of the Voronezh region market for land intended for road construction; this makes the radical method of empirical optimization of the array cost-effective, taking into account the significant absolute value of the land. The high economic efficiency of the proposed algorithms is demonstrated.

  12. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tao Ma

    2016-10-01

    Full Text Available The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.

  13. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    Science.gov (United States)

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.
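    To make the data-partitioning and distance-feature step concrete, the sketch below uses plain k-means centres as a stand-in for spectral clustering and represents each test point by its distances to those centres, which is the kind of feature the DNN stage then consumes; the DNN itself is omitted and all names and data are illustrative, not the SCDNN implementation.

```python
# Minimal sketch of the SCDNN-style preprocessing described above: partition the training
# data around cluster centres, then represent each test point by its distances to the
# centres before handing the features to a classifier.  Plain k-means stands in for
# spectral clustering here, and the DNN stage is omitted.
import numpy as np

def kmeans_centres(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def distance_features(X, centres):
    """Each row becomes its vector of distances to the k cluster centres."""
    return np.linalg.norm(X[:, None] - centres[None], axis=2)

rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
centres = kmeans_centres(X_train, k=2)
X_test_features = distance_features(rng.normal(0, 1, (5, 5)), centres)
print(X_test_features.shape)   # (5, 2): one distance per cluster centre
```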

  14. On Evaluating Power Loss with HATSGA Algorithm for Power Network Reconfiguration in the Smart Grid

    OpenAIRE

    Calhau, Flavio Galvão; Pezzutti, Alysson; Martins, Joberto S. B.

    2017-01-01

    This paper presents the power network reconfiguration algorithm HATSGA with an "R" modeling approach and evaluates its behavior in computing new reconfiguration topologies for the power network in the Smart Grid context. The modelling of the power distribution network with the language "R" is used to represent the network and support computation of distinct algorithm configurations towards the evaluation of new reconfiguration topologies. The HATSGA algorithm adopts hybrid Tabu Search and...

  15. BOUNDARY DETECTION ALGORITHMS IN WIRELESS SENSOR NETWORKS: A SURVEY

    Directory of Open Access Journals (Sweden)

    Lanny Sitanayah

    2009-01-01

    Full Text Available Wireless sensor networks (WSNs) comprise a large number of sensor nodes, which are spread out within a region and communicate using wireless links. In some WSN applications, recognizing boundary nodes is important for topology discovery, geographic routing and tracking. In this paper, we study the problem of recognizing the boundary nodes of a WSN. We firstly identify the factors that influence the design of algorithms for boundary detection. Then, we classify the existing work in boundary detection, which is vital for target tracking to detect when the targets enter or leave the sensor field.

  16. The production route selection algorithm in virtual manufacturing networks

    Science.gov (United States)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2017-08-01

    The increasing requirements and competition in the global market challenge companies' profitability in production and supply chain management. This situation has become the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. In the paper, an algorithm is proposed for selecting a production route from the set of admissible routes that meets the technology and resource requirements under the criterion of minimum cost.

  17. Finite-Size Geometric Entanglement from Tensor Network Algorithms

    OpenAIRE

    Shi, Qian-Qian; Orus, Roman; Fjaerestad, John Ove; Zhou, Huan-Qiang

    2009-01-01

    The global geometric entanglement is studied in the context of newly-developed tensor network algorithms for finite systems. For one-dimensional quantum spin systems it is found that, at criticality, the leading finite-size correction to the global geometric entanglement per site behaves as $b/n$, where $n$ is the size of the system and $b$ a given coefficient. Our conclusion is based on the computation of the geometric entanglement per spin for the quantum Ising model in a transverse magneti...

  18. Reverse engineering highlights potential principles of large gene regulatory network design and learning.

    Science.gov (United States)

    Carré, Clément; Mas, André; Krouk, Gabriel

    2017-01-01

    Inferring transcriptional gene regulatory networks from transcriptomic datasets is a key challenge of systems biology, with potential impacts ranging from medicine to agronomy. There are several techniques used presently to experimentally assay transcription factor to target relationships, defining important information about real gene regulatory network connections. These techniques include classical ChIP-seq, yeast one-hybrid, or more recently, DAP-seq or target technologies. These techniques are usually used to validate algorithm predictions. Here, we developed a reverse engineering approach based on mathematical and computer simulation to evaluate the impact that this prior knowledge on gene regulatory networks may have on training machine learning algorithms. First, we developed a gene regulatory networks-simulating engine called FRANK (Fast Randomizing Algorithm for Network Knowledge) that is able to simulate large gene regulatory networks (containing 10^4 genes) with characteristics of gene regulatory networks observed in vivo. FRANK also generates stable or oscillatory gene expression directly produced by the simulated gene regulatory networks. The development of FRANK leads to important general conclusions concerning the design of large and stable gene regulatory networks harboring scale free properties (built ex nihilo). In combination with supervised (accepting prior knowledge) support vector machine algorithm we (i) address biologically oriented questions concerning our capacity to accurately reconstruct gene regulatory networks and in particular we demonstrate that prior-knowledge structure is crucial for accurate learning, and (ii) draw conclusions to inform experimental design to perform learning able to solve gene regulatory networks in the future. By demonstrating that our predictions concerning the influence of the prior-knowledge structure on support vector machine learning capacity holds true on real data (Escherichia coli K14 network

  19. Structure Learning for Deep Neural Networks Based on Multiobjective Optimization.

    Science.gov (United States)

    Liu, Jia; Gong, Maoguo; Miao, Qiguang; Wang, Xiaogang; Li, Hao

    2017-05-05

    This paper focuses on the connecting structure of deep neural networks and proposes a layerwise structure learning method based on multiobjective optimization. A model with better generalization can be obtained by reducing the connecting parameters in deep networks. The aim is to find the optimal structure with high representation ability and better generalization for each layer. Then, the visible data are modeled with respect to structure based on products of experts (PoE). In order to mitigate the difficulty of estimating the denominator in PoE, the denominator is simplified and taken as another objective, i.e., the connecting sparsity. Moreover, for the consideration of the contradictory nature between the representation ability and the network connecting sparsity, the multiobjective model is established. An improved multiobjective evolutionary algorithm is used to solve this model. Two tricks are designed to decrease the computational cost according to the properties of input data. The experiments on single-layer level, hierarchical level, and application level demonstrate the effectiveness of the proposed algorithm, and the learned structures can improve the performance of deep neural networks.

  20. Learning Algorithm for a Brachiating Robot

    Directory of Open Access Journals (Sweden)

    Hideki Kajima

    2003-01-01

    Full Text Available This paper introduces a new concept of multi-locomotion robot inspired by an animal. The robot, ‘Gorilla Robot II’, can select the appropriate locomotion (from biped locomotion, quadruped locomotion and brachiation) according to an environment or task. We consider ‘brachiation’ to be one of the most dynamic of animal motions. To develop a brachiation controller, architecture of the hierarchical behaviour-based controller, which consists of behaviour controllers and behaviour coordinators, was used. To achieve better brachiation, an enhanced learning method for motion control, adjusting the timing of the behaviour coordination, is proposed. Finally, it is shown that the developed robot successfully performs two types of brachiation and continuous locomotion.

  1. Learning sorting algorithms through visualization construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and

  2. Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling

    NARCIS (Netherlands)

    Liu, D.

    2013-01-01

    The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our

  3. Generalized SMO algorithm for SVM-based multitask learning.

    Science.gov (United States)

    Cai, Feng; Cherkassky, Vladimir

    2012-06-01

    Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.

  4. Evaluation of clustering algorithms for protein-protein interaction networks

    Directory of Open Access Journals (Sweden)

    van Helden Jacques

    2006-11-01

    Full Text Available Abstract Background Protein interactions are crucial components of all cellular processes. Recently, high-throughput methods have been developed to obtain a global description of the interactome (the whole network of protein interactions for a given organism). In 2002, the yeast interactome was estimated to contain up to 80,000 potential interactions. This estimate is based on the integration of data sets obtained by various methods (mass spectrometry, two-hybrid methods, genetic studies). High-throughput methods are known, however, to yield a non-negligible rate of false positives, and to miss a fraction of existing interactions. The interactome can be represented as a graph where nodes correspond with proteins and edges with pairwise interactions. In recent years clustering methods have been developed and applied in order to extract relevant modules from such graphs. These algorithms require the specification of parameters that may drastically affect the results. In this paper we present a comparative assessment of four algorithms: Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Super Paramagnetic Clustering (SPC), and Molecular Complex Detection (MCODE). Results A test graph was built on the basis of 220 complexes annotated in the MIPS database. To evaluate the robustness to false positives and false negatives, we derived 41 altered graphs by randomly removing edges from or adding edges to the test graph in various proportions. Each clustering algorithm was applied to these graphs with various parameter settings, and the clusters were compared with the annotated complexes. We analyzed the sensitivity of the algorithms to the parameters and determined their optimal parameter values. We also evaluated their robustness to alterations of the test graph. We then applied the four algorithms to six graphs obtained from high-throughput experiments and compared the resulting clusters with the annotated complexes. Conclusion This

  5. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.

    Science.gov (United States)

    Song, Qing; Wu, Yilei; Soh, Yeng Chai

    2008-11-01

    For a recurrent neural network (RNN), its transient response is a critical issue, especially for real-time signal processing applications. The conventional RNN training algorithms, such as backpropagation through time (BPTT) and real-time recurrent learning (RTRL), have not adequately addressed this problem because they suffer from low convergence speed. While increasing the learning rate may help to improve the performance of the RNN, it can result in unstable training in terms of weight divergence. Therefore, an optimal tradeoff between RNN training speed and weight convergence is desired. In this paper, a robust adaptive gradient-descent (RAGD) training algorithm of RNN is developed based on a novel RNN hybrid training concept. It switches the training patterns between standard real-time online backpropagation (BP) and RTRL according to the derived convergence and stability conditions. The weight convergence and L2-stability of the algorithm are derived via the conic sector theorem. The optimized adaptive learning maximizes the training speed of the RNN for each weight update without violating the stability and convergence criteria. Computer simulations are carried out to demonstrate the applicability of the theoretical results.

  6. Congestion Relief of Contingent Power Network with Evolutionary Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Abhinandan De

    2012-03-01

    Full Text Available This paper presents a methodology based on a differential evolution optimization technique for congestion management cost optimization of contingent power networks. In deregulated systems, line congestion, apart from causing stability problems, can increase the cost of electricity. Restraining line flow to a particular level of congestion is therefore imperative from both the stability and the economy points of view. Employing the Congestion Sensitivity Index proposed in this paper, the algorithm can be adopted to select the congested lines in a power network and then to search for a congestion-constrained optimal generation schedule at a minimum congestion management charge, without any load curtailment or installation of FACTS devices. It is shown that, upon application, the methodology can provide better operating conditions in terms of improved bus voltage and loss profiles of the system. The efficiency of the proposed methodology has been tested on an IEEE 30-bus benchmark system and the results look promising.

  7. APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-04-01

    Full Text Available Stock prediction is the task of determining the future value of a company stock traded on an exchange. It plays a crucial role in raising the profit gained by firms and investors. Over the past few years, many methods have been developed, with much of the effort focused on machine learning frameworks that achieve promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar wavelet, is applied to estimate stock prices. The system was trained and tested with real data of various companies collected from Yahoo Finance. The obtained results are encouraging.

  8. Odd-graceful labeling algorithm and its implementation of generalized ring core network

    Science.gov (United States)

    Xie, Jianmin; Hong, Wenmei; Zhao, Tinggang; Yao, Bing

    2017-08-01

    The computer implementation of labeling algorithms for special networks has practical guiding significance for the design of computer communication network systems with respect to functionality, reliability, and low communication cost. The generalized ring core network is an important hybrid network topology and the basis of the generalized ring network. In this paper, based on the requirements of research on generalized ring network addressing, the authors design an odd-graceful labeling algorithm for the generalized ring core network when n1, n2, ..., nm ≡ 0 (mod 4), prove that the structure is odd-graceful, implement the corresponding software, and show the practical effectiveness of the algorithm with experimental data.

  9. Learning Bayesian networks from big meteorological spatial datasets. An alternative to complex network analysis

    Science.gov (United States)

    Gutiérrez, Jose Manuel; San Martín, Daniel; Herrera, Sixto; Santiago Cofiño, Antonio

    2016-04-01

    The growing availability of spatial datasets (observations, reanalysis, and regional and global climate models) demands efficient multivariate spatial modeling techniques for many problems of interest (e.g. teleconnection analysis, multi-site downscaling, etc.). Complex networks have been recently applied in this context using graphs built from pairwise correlations between the different stations (or grid boxes) forming the dataset. However, this analysis does not take into account the full dependence structure underlying the data, given by all possible marginal and conditional dependencies among the stations, and does not allow a probabilistic analysis of the dataset. In this talk we introduce Bayesian networks as an alternative data-driven technique for multivariate analysis and modeling which allows building a joint probability distribution of the stations including all relevant dependencies in the dataset. Bayesian networks are a sound machine learning technique using a graph to 1) encode the main dependencies among the variables and 2) obtain a factorization of the joint probability distribution of the stations given by a reduced number of parameters. For a particular problem, the resulting graph provides a qualitative analysis of the spatial relationships in the dataset (alternative to complex network analysis), and the resulting model allows for a probabilistic analysis of the dataset. Bayesian networks have been widely applied in many fields, but their use in climate problems is hampered by the large number of variables (stations) involved in this field, since the complexity of the existing algorithms for learning the graphical structure from data grows nonlinearly with the number of variables. In this contribution we present a modified local learning algorithm for Bayesian networks adapted to this problem, which allows inferring the graphical structure for thousands of stations (from observations) and/or gridboxes (from model simulations) thus providing new

  10. Function-Oriented Networking and On-Demand Routing System in Network Using Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Young-Bo Sim

    2017-11-01

    Full Text Available In this paper, we propose and develop Function-Oriented Networking (FON), a platform for network users. Its philosophy differs from that of OpenFlow, a Software-Defined Networking technology aimed at network managers. FON is a technology that can immediately reflect the demands of network users in the network, unlike the existing OpenFlow and Network Functions Virtualization (NFV), which do not directly reflect the needs of network users. It allows the network user to determine the network policy directly, so the policy can be applied more precisely than a policy applied by the network manager. This is expected to increase the satisfaction of service users when network users try to provide new services. We developed a FON function that performs on-demand routing for Low-Delay Required services. We analyzed the characteristics of the Ant Colony Optimization (ACO) algorithm and found that it is suitable for low-delay required services. It was also the first implementation in the world of routing software using the ACO algorithm in a real Ethernet network. In order to improve the routing performance, several variants of the ACO algorithm have been developed to enable faster path search, routing, and path recovery. The relationship between the network performance index and the ACO routing parameters is derived, and the results are compared and analyzed. Through this, it was possible to further develop the ACO algorithm.
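    The sketch below shows the core ACO mechanics the abstract relies on, a pheromone-biased probabilistic next-hop choice followed by evaporation and reinforcement of the best path; it is a generic illustration, not the FON implementation, and the graph, parameters and function name are invented for the example.

```python
# Compact ant colony optimization (ACO) path-search sketch: ants pick next hops with
# probability proportional to pheromone * (1/edge delay), then pheromone evaporates
# and the best path found is reinforced.  Illustrative only, not the FON code.
import random

def aco_path(graph, src, dst, n_ants=50, n_iter=30, rho=0.1):
    """graph: {node: {neighbor: delay}}.  Returns the best (delay, path) found."""
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}      # pheromone per edge
    best = (float("inf"), None)
    for _ in range(n_iter):
        for _ in range(n_ants):
            node, path, cost = src, [src], 0.0
            while node != dst:
                choices = [v for v in graph[node] if v not in path]
                if not choices:
                    cost = float("inf")                        # dead end, discard ant
                    break
                weights = [tau[(node, v)] / graph[node][v] for v in choices]
                node = random.choices(choices, weights=weights)[0]
                cost += graph[path[-1]][node]
                path.append(node)
            if cost < best[0]:
                best = (cost, path)
        for e in tau:                                          # evaporation
            tau[e] *= (1.0 - rho)
        if best[1]:                                            # reinforce best path
            for u, v in zip(best[1], best[1][1:]):
                tau[(u, v)] += 1.0 / best[0]
    return best

g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(aco_path(g, "A", "D"))   # expected: (3.0, ['A', 'B', 'C', 'D'])
```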

  11. DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation.

    Science.gov (United States)

    Kalsi, Shruti; Kaur, Harleen; Chang, Victor

    2017-12-05

    Cryptography is the science not only of applying complex mathematics and logic to design strong methods of hiding data, called encryption, but also of retrieving the original data back, called decryption. The purpose of cryptography is to transmit a message between a sender and a receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm, but also a strong key and a strong concept for the encryption and decryption process. We introduce the concept of DNA Deep Learning Cryptography, defined as a technique of concealing data in terms of a DNA sequence and deep learning. In the cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely Adenine (A), Cytosine (C), Guanine (G) and Thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not go beyond the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, firstly, a method and its implementation for key generation based on the theory of natural selection, using a Genetic Algorithm with the Needleman-Wunsch (NW) algorithm, and secondly, a method for the implementation of encryption and decryption based on DNA computing, using the biological operations Transcription, Translation, DNA Sequencing and Deep Learning.
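    As a toy illustration of representing digital data as DNA bases, the sketch below uses a common 2-bits-per-base mapping (A=00, C=01, G=10, T=11); the paper's specific letter-to-base combination table, the GA/NW key generation and the deep learning stages are not reproduced here, and the mapping is an assumption for the example only.

```python
# Toy illustration of concealing data as a DNA sequence with a common 2-bits-per-base
# mapping (A=00, C=01, G=10, T=11).  This is not the paper's specific encoding table.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):                 # four 2-bit groups per byte
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

encoded = bytes_to_dna(b"key")
print(encoded)                       # 'CGGTCGCCCTGC'
print(dna_to_bytes(encoded))         # b'key'
```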

  12. Recurrent neural network-based modeling of gene regulatory network using elephant swarm water search algorithm.

    Science.gov (United States)

    Mandal, Sudip; Saha, Goutam; Pal, Rajat Kumar

    2017-08-01

    Correct inference of genetic regulation inside a cell from biological databases such as time series microarray data is one of the greatest challenges of the post-genomic era for biologists and researchers. The Recurrent Neural Network (RNN) is one of the most popular and simplest approaches to model the dynamics as well as to infer correct dependencies among genes. Inspired by the behavior of social elephants, we propose a new metaheuristic, the Elephant Swarm Water Search Algorithm (ESWSA), to infer Gene Regulatory Networks (GRN). This algorithm is mainly based on the water search strategy of intelligent and social elephants during drought, utilizing different types of communication techniques. Initially, the algorithm is tested against benchmark small and medium scale artificial genetic networks, without and with different noise levels, and the efficiency is observed in terms of parametric error, minimum fitness value, execution time, accuracy of prediction of true regulations, etc. Next, the proposed algorithm is tested against real time gene expression data of the Escherichia coli SOS network, and the results are compared with other state-of-the-art optimization methods. The experimental results suggest that ESWSA is very efficient for the GRN inference problem and performs better than other methods in many ways.
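    To show the kind of model being fitted, the sketch below simulates a minimal discrete-time RNN model of gene expression of the sort commonly used for GRN inference, where each gene's next value depends on a weighted sum of all genes through a sigmoid; the ESWSA metaheuristic that searches for the weights is not reproduced, and the 3-gene network, decay term and time step are invented for the example.

```python
# Minimal discrete-time RNN model of gene expression of the kind used for GRN
# inference: x_i(t+dt) depends on a weighted sum of all genes through a sigmoid.
# The ESWSA metaheuristic that searches for the weights is not reproduced here.
import numpy as np

def simulate_rnn_grn(W, b, x0, n_steps=20, dt=0.5, decay=1.0):
    """x(t+dt) = x(t) + dt * (sigmoid(W x + b) - decay * x(t))."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = traj[-1]
        drive = 1.0 / (1.0 + np.exp(-(W @ x + b)))
        traj.append(x + dt * (drive - decay * x))
    return np.array(traj)

# Toy 3-gene network: gene 0 activates gene 1, gene 1 represses gene 2.
W = np.array([[0.0, 0.0, 0.0],
              [3.0, 0.0, 0.0],
              [0.0, -3.0, 0.0]])
b = np.array([0.0, -1.5, 1.5])
traj = simulate_rnn_grn(W, b, x0=[1.0, 0.0, 0.0])
print(np.round(traj[-1], 2))   # near-steady expression levels of the three genes
```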

  13. Cascaded VLSI Chips Help Neural Network To Learn

    Science.gov (United States)

    Duong, Tuan A.; Daud, Taher; Thakoor, Anilkumar P.

    1993-01-01

    Cascading provides 12-bit resolution needed for learning. Using conventional silicon VLSI chip fabrication technology, a fully connected architecture consisting of 32 wide-range, variable-gain, sigmoidal neurons along one diagonal and a 7-bit resolution, electrically programmable, synaptic 32 x 31 weight matrix was implemented on a neuron-synapse chip. To increase weight resolution nominally from 7 to 13 bits, synapses on the chip were individually cascaded with respective synapses on another 32 x 32 matrix chip with 7-bit resolution synapses only (without neurons). The cascade correlation algorithm varies the number of layers effectively connected into the network; it adds hidden layers one at a time during the learning process in such a way as to optimize the overall number of neurons and the complexity and configuration of the network.
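    The arithmetic below illustrates why cascading a coarse and a fine 7-bit synapse extends the effective weight resolution; the 1/64 attenuation factor and the code range are assumptions for the example, not the chip's actual circuit values.

```python
# Numeric illustration of extending weight resolution by cascading two 7-bit synapses:
# one carries a coarse value and a second, attenuated one carries a fine correction.
# The attenuation factor (1/64) is illustrative, not the chip's actual circuit value.
COARSE_LSB = 1.0          # weight step of the coarse 7-bit synapse
FINE_LSB = 1.0 / 64.0     # fine synapse contributes at 1/64 of the coarse step

def cascaded_weight(coarse_code, fine_code):
    """Both codes are signed 7-bit values in [-64, 63]."""
    return coarse_code * COARSE_LSB + fine_code * FINE_LSB

# A 7-bit synapse alone resolves 128 levels; the cascaded pair resolves 128 * 64 = 8192
# distinct levels (roughly 13 bits), which is what makes gradient-descent learning workable.
print(cascaded_weight(5, 3))                            # 5.046875
print(cascaded_weight(5, 4) - cascaded_weight(5, 3))    # smallest step: 0.015625
```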

  14. Evolutionary epistemology and dynamical virtual learning networks.

    Science.gov (United States)

    Giani, Umberto

    2004-01-01

    This paper is an attempt to define the main features of a new educational model aimed at satisfying the needs of a rapidly changing society. The evolutionary epistemology paradigm of culture diffusion in human groups could be the conceptual ground for the development of this model. Multidimensionality, multi-disciplinarity, complexity, connectivity, critical thinking, creative thinking, constructivism, flexible learning and contextual learning are the dimensions that should characterize distance learning models aimed at increasing the epistemological variability of learning communities. Two multimedia educational software packages, Dynamic Knowledge Networks (DKN) and Dynamic Virtual Learning Networks (DVLN), are described. These two complementary tools instantiate these dimensions, and were tested in almost 150 online courses. Even if the examples are framed in the medical context, the analysis of the shortcomings of the traditional educational systems and the proposed solutions can be applied to the vast majority of the educational contexts.

  15. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    Science.gov (United States)

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
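    To make the one-shot ELM solve concrete, the sketch below fits a basic single-hidden-layer extreme learning machine with random hidden weights and a least-squares output layer; the paper's stacked ELM inside a CNN and its modified backpropagation are not reproduced, and the toy regression task and sizes are assumptions.

```python
# Basic single-hidden-layer extreme learning machine (ELM): random hidden weights,
# output weights solved in a single least-squares step.  The paper's stacked ELM inside
# a CNN and its modified backpropagation are not reproduced here.
import numpy as np

def elm_fit(X, T, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # hidden-layer activations
    beta = np.linalg.pinv(H) @ T            # single-step solve for output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
T = np.sin(X) + 0.05 * rng.standard_normal(X.shape)
W, b, beta = elm_fit(X, T)
print(elm_predict(np.array([[1.0]]), W, b, beta).item())   # close to sin(1) ~ 0.84
```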

  16. Peer Apprenticeship Learning in Networked Learning Communities: The Diffusion of Epistemic Learning

    Science.gov (United States)

    Jamaludin, Azilawati; Shaari, Imran

    2016-01-01

    This article discusses peer apprenticeship learning (PAL) as situated within networked learning communities (NLCs). The context revolves around the diffusion of technologically-mediated learning in Singapore schools, where teachers begin to implement inquiry-oriented learning, consistent with 21st century learning, among students. As these schools…

  17. Genetic Algorithm Optimization of Artificial Neural Networks for Hydrological Modelling

    Science.gov (United States)

    Abrahart, R. J.

    2004-05-01

    This paper will consider the case for genetic algorithm optimization in the development of an artificial neural network model. It will provide a methodological evaluation of reported investigations with respect to hydrological forecasting and prediction. The intention in such operations is to develop a superior modelling solution that will be: more accurate in terms of output precision and model estimation skill; more tractable in terms of personal requirements and end-user control; and/or more robust in terms of conceptual and mechanical power with respect to adverse conditions. The genetic algorithm optimization toolbox could be used to perform a number of specific roles or purposes and it is the harmonious and supportive relationship between neural networks and genetic algorithms that will be highlighted and assessed. There are several neural network mechanisms and procedures that could be enhanced and potential benefits are possible at different stages in the design and construction of an operational hydrological model e.g. division of inputs; identification of structure; initialization of connection weights; calibration of connection weights; breeding operations between successful models; and output fusion associated with the development of ensemble solutions. Each set of opportunities will be discussed and evaluated. Two strategic questions will also be considered: [i] should optimization be conducted as a set of small individual procedures or as one large holistic operation; [ii] what specific function or set of weighted vectors should be optimized in a complex software product e.g. timings, volumes, or quintessential hydrological attributes related to the 'problem situation' - that might require the development of flood forecasting, drought estimation, or record infilling applications. The paper will conclude with a consideration of hydrological forecasting solutions developed on the combined methodologies of co-operative co-evolution and

  18. Combining neural networks and genetic algorithms for hydrological flow forecasting

    Science.gov (United States)

    Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr

    2010-05-01

    We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (from one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-term rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary-encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is rather computationally exacting (taking hours to days on a desktop PC), so a high-performance mainframe computer was used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general it can be concluded that the neural models show about 5 per cent improvement in terms of efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer trained by the backpropagation algorithm and…
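    The input-selection loop described above (binary chromosomes over lagged inputs, two-point crossover, bit-flip mutation, tournament selection) can be sketched as follows. To keep the sketch runnable, the costly build-and-test of a neural model inside the fitness function is replaced here by a cheap ridge-regression proxy; that substitution, like the parameter values, is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def two_point_crossover(a, b):
    """Two-point crossover on binary chromosomes."""
    i, j = sorted(rng.integers(1, len(a), size=2))
    child = a.copy()
    child[i:j] = b[i:j]
    return child

def mutate(bits, p=0.02):
    """Random bit-flipping mutation."""
    flips = rng.random(len(bits)) < p
    return np.where(flips, 1 - bits, bits)

def fitness(bits, X, y):
    """Score an input-selection pattern; a ridge-regression proxy stands in for the
    paper's rounds of building and testing a neural network model."""
    cols = np.flatnonzero(bits)
    if cols.size == 0:
        return -np.inf
    Xs = X[:, cols]
    w = np.linalg.solve(Xs.T @ Xs + 1e-3 * np.eye(cols.size), Xs.T @ y)
    return -np.mean((Xs @ w - y) ** 2) - 0.01 * cols.size   # penalize large input sets

def select_inputs(X, y, pop_size=30, generations=50):
    n_bits = X.shape[1]                  # e.g. 48 lagged hours per candidate variable
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        children = []
        for _ in range(pop_size):
            # Tournament selection of two parents
            t1, t2 = rng.integers(pop_size, size=3), rng.integers(pop_size, size=3)
            p1, p2 = pop[t1[np.argmax(scores[t1])]], pop[t2[np.argmax(scores[t2])]]
            children.append(mutate(two_point_crossover(p1, p2)))
        pop = np.array(children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[np.argmax(scores)]
```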

  19. Online Incremental Learning for High Bandwidth Network Traffic Classification

    Directory of Open Access Journals (Sweden)

    H. R. Loo

    2016-01-01

    Full Text Available Data stream mining techniques are able to classify evolving data streams such as network traffic in the presence of concept drift. In order to classify high-bandwidth network traffic in real time, data stream mining classifiers need to be implemented on a reconfigurable high-throughput platform, such as a Field Programmable Gate Array (FPGA). This paper proposes an algorithm for online network traffic classification based on the concept of incremental k-means clustering to continuously learn from both labeled and unlabeled flow instances. Two distance measures for incremental k-means (Euclidean and Manhattan distance) are analyzed to measure their impact on network traffic classification in the presence of concept drift. The experimental results on real datasets show that the proposed algorithm exhibits consistency, with up to 94% average accuracy for both distance measures, even in the presence of concept drift. The proposed incremental k-means classification using Manhattan distance can classify network traffic 3 times faster than Euclidean distance, at 671 thousand flow instances per second.
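    The per-instance centroid update at the core of incremental k-means, together with the two distance measures compared above, can be written in a few lines. The class below is an illustrative sketch (centroid seeding, labeling, and feature choices are assumptions); it does not reproduce the FPGA implementation or the full concept-drift handling of the paper.

```python
import numpy as np

def dist(a, b, metric="euclidean"):
    d = a - b
    if metric == "euclidean":
        return np.sqrt(np.sum(d * d, axis=-1))
    return np.sum(np.abs(d), axis=-1)         # Manhattan distance

class IncrementalKMeans:
    """Online k-means: each arriving flow instance nudges its nearest centroid."""
    def __init__(self, k, n_features, metric="euclidean"):
        self.centroids = np.zeros((k, n_features))
        self.counts = np.zeros(k)
        self.classes = [None] * k              # last traffic class seen per cluster
        self.metric = metric

    def partial_fit(self, x, label=None):
        empty = np.flatnonzero(self.counts == 0)
        if empty.size:                         # seed empty clusters with early instances
            j = int(empty[0])
        else:
            j = int(np.argmin(dist(self.centroids, x, self.metric)))
        self.counts[j] += 1
        # Running-mean update moves the centroid toward the new instance.
        self.centroids[j] += (x - self.centroids[j]) / self.counts[j]
        if label is not None:                  # labeled flows refine the cluster's class
            self.classes[j] = label
        return j

    def predict(self, x):
        j = int(np.argmin(dist(self.centroids, x, self.metric)))
        return self.classes[j]
```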

  20. Maximum entropy methods for extracting the learned features of deep neural networks.

    Science.gov (United States)

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
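    One simple way to realize the kind of constrained sampling described above is a Metropolis sampler over sequences whose energy penalizes deviation of the network's output from its value at the anchor sequence. The sketch below is schematic: the single quadratic constraint, the toy GC-content stand-in for a trained network, and all parameter values are assumptions, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
ALPHABET = list("ACGT")

def sample_constrained(seq, net_fn, n_steps=5000, lam=10.0):
    """Metropolis sampling over sequences anchored at `seq`.

    Energy lam * (net_fn(s) - net_fn(seq))**2 keeps the learned function close to its
    value at the anchor, so accepted samples explore sequences the network treats as
    (approximately) equivalent. `net_fn` maps a sequence string to a scalar.
    """
    anchor_score = net_fn(seq)
    current, energy, samples = list(seq), 0.0, []
    for _ in range(n_steps):
        pos = rng.integers(len(current))
        old = current[pos]
        current[pos] = ALPHABET[rng.integers(len(ALPHABET))]    # propose a single-base change
        new_energy = lam * (net_fn("".join(current)) - anchor_score) ** 2
        # Metropolis rule: always accept downhill moves, occasionally accept uphill ones.
        if rng.random() < np.exp(min(0.0, energy - new_energy)):
            energy = new_energy
        else:
            current[pos] = old                                  # reject: restore previous base
        samples.append("".join(current))
    return samples

# Toy stand-in for a trained network: GC content of the sequence (illustrative only).
gc_content = lambda s: (s.count("G") + s.count("C")) / len(s)
draws = sample_constrained("ACGTACGTACGTACGT", gc_content, n_steps=200)
```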